Abstract Interpreters for Free
NASA Astrophysics Data System (ADS)
Might, Matthew
In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
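As a generic, hedged illustration of the abstraction/concretization idea behind a Galois connection (this is not the paper's CPS construction; the sign domain and function names below are purely illustrative), consider a classic sign abstraction in Python:

```python
# Hypothetical illustration (not the paper's construction): a sign abstraction
# showing the alpha half of a Galois connection. alpha maps sets of concrete
# integers to abstract signs; a sound abstract operation over-approximates the
# corresponding concrete operation.

NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"  # abstract values

def alpha(concrete_ints):
    """Abstract a set of integers to a single sign value."""
    signs = {NEG if n < 0 else ZERO if n == 0 else POS for n in concrete_ints}
    return signs.pop() if len(signs) == 1 else TOP

def abstract_mul(a, b):
    """Sound abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# Soundness check on a small sample: abstracting then operating
# over-approximates operating then abstracting.
xs, ys = {-3, -1}, {2, 5}
concrete = {x * y for x in xs for y in ys}
assert abstract_mul(alpha(xs), alpha(ys)) in (alpha(concrete), TOP)
```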
NASA Astrophysics Data System (ADS)
Veerraju, R. P. S. P.; Rao, A. Srinivasa; Murali, G.
2010-10-01
Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. It improves internal code structure without altering external functionality by transforming functions and rethinking algorithms. It is an iterative process. Refactorings include reducing scope, replacing complex instructions with simpler or built-in instructions, and combining multiple statements into one statement. By transforming the code with refactoring techniques, it becomes faster to change, execute, and download. It is an excellent best practice to adopt for programmers wanting to improve their productivity. Refactoring is similar to performance optimization, which is also a behavior-preserving transformation. It also helps us find bugs when we are trying to fix a bug in difficult-to-understand code: by cleaning things up, we make it easier to expose the bug. Refactoring improves the quality of application design and implementation. In general, there are three cases concerning refactoring: iterative refactoring, refactoring only when necessary, and not refactoring at all. Martin Fowler identifies four key reasons to refactor: refactoring improves the design of software, makes software easier to understand, helps us find bugs, and helps us program faster. There is an additional benefit of refactoring: it changes the way a developer thinks about the implementation even when not refactoring. There are three types of refactoring. 1) Code refactoring: often referred to simply as refactoring, this is the refactoring of programming source code. 2) Database refactoring: a simple change to a database schema that improves its design while retaining both its behavioral and informational semantics. 3) User interface (UI) refactoring: a simple change to the UI which retains its semantics. Finally, we conclude that the benefits of refactoring are: it improves the design of software, makes software easier to understand, cleans up the code, helps us find bugs, and helps us program faster.
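A hypothetical before/after sketch in Python (not taken from the paper) illustrating two of the refactorings listed above, replacing hand-rolled logic with a built-in and combining multiple statements into one:

```python
# Before: verbose, multi-statement implementation.
def total_price_before(items):
    total = 0
    for item in items:
        price = item["price"]
        quantity = item["quantity"]
        subtotal = price * quantity
        total = total + subtotal
    return total

# After: same external behavior, expressed with a built-in and one statement.
def total_price_after(items):
    return sum(item["price"] * item["quantity"] for item in items)

items = [{"price": 2.5, "quantity": 4}, {"price": 1.0, "quantity": 3}]
assert total_price_before(items) == total_price_after(items) == 13.0
```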
Refactoring affordances in corporate wikis: a case for the use of mind maps
NASA Astrophysics Data System (ADS)
Puente, Gorka; Díaz, Oscar; Azanza, Maider
2015-11-01
The organisation of corporate wikis tends to deteriorate as time goes by. Rearranging categories, structuring articles and even moving sections among articles are cumbersome tasks in current wiki engines. This discourages the layman. But, it is the layman who writes the articles, knows the wiki content and detects refactoring opportunities. Our goal is to improve the refactoring affordances of current wiki engines by providing an alternative front-end tuned to refactoring. This is achieved by (1) surfacing the structure of the wiki corpus as a mind map, and (2) conducting refactoring as mind map reshaping. To this end, we introduce WikiWhirl, a domain-specific language for wiki refactoring. WikiWhirl is supported as an extension of FreeMind, a popular mind mapping tool. In this way, refactoring operations are intuitively conducted as actions upon mind map nodes. In a refactoring session a user imports the wiki structure as a FreeMind map; next, conducts the refactoring operations on the map, and finally, the effects are saved in the wiki database. The operational semantics of the WikiWhirl operations follow refactoring good practices (e.g., authorship preservation). Results from a controlled experiment suggest that WikiWhirl outperforms MediaWiki in three main affordance enablers: understandability, productivity and fulfillment of refactoring good practices.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil
2010-01-01
This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, William L.
2013-01-01
This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent recompilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the Fun3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil
2011-01-01
This users manual provides in-depth information concerning installation and execution of Laura, version 5. Laura is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 Laura code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, Laura now shares gas-physics modules, MPI modules, and other low-level modules with the Fun3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil
2009-01-01
This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multiphysics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
NASA Astrophysics Data System (ADS)
Amarnath, N. S.; Pound, M. W.; Wolfire, M. G.
The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 4 years. DIRT uses results from a number of numerical models of astrophysical processes, and has an AWT based user interface. DIRT has been refactored to decouple data representation from plotting and curve fitting. This makes it easier to add new kinds of astrophysical models, use the plotter in other applications, migrate the user interface to Swing components, and modify the user interface to add functionality (for example, SIRTF tools). DIRT is now an extension of two generic libraries, one of which manages data representation and caching, and the second of which manages plotting and curve fitting. This project is an example of refactoring with no impact on user interface, so the existing user community was not affected.
Quandt, Erik M; Hammerling, Michael J; Summers, Ryan M; Otoupal, Peter B; Slater, Ben; Alnahhas, Razan N; Dasgupta, Aurko; Bachman, James L; Subramanian, Mani V; Barrick, Jeffrey E
2013-06-21
The widespread use of caffeine (1,3,7-trimethylxanthine) and other methylxanthines in beverages and pharmaceuticals has led to significant environmental pollution. We have developed a portable caffeine degradation operon by refactoring the alkylxanthine degradation (Alx) gene cluster from Pseudomonas putida CBB5 to function in Escherichia coli. In the process, we discovered that adding a glutathione S-transferase from Janthinobacterium sp. Marseille was necessary to achieve N7-demethylation activity. E. coli cells with the synthetic operon degrade caffeine to the guanine precursor, xanthine. Cells deficient in de novo guanine biosynthesis that contain the refactored operon are "addicted" to caffeine: their growth density is limited by the availability of caffeine or other xanthines. We show that the addicted strain can be used as a biosensor to measure the caffeine content of common beverages. The synthetic N-demethylation operon could be useful for reclaiming nutrient-rich byproducts of coffee bean processing and for the cost-effective bioproduction of methylxanthine drugs.
Application of Design Patterns in Refactoring Software Design
NASA Technical Reports Server (NTRS)
Baggs, Rhoda; Shaykhian, Gholam Ali
2007-01-01
Refactoring software design is a method of changing a software design while explicitly preserving its design functionalities. The presented approach utilizes design patterns as the basis for refactoring software design. Design solutions are compared through C++ programming language examples to illustrate this approach. Developing reusable components is also discussed; the paper shows that constructing such components can diminish the added burden of both refactoring and the use of design patterns.
Apply Design Patterns to Refactor Software Design
NASA Technical Reports Server (NTRS)
Baggs, Rhoda; Shaykhian, Gholam Ali
2007-01-01
Refactoring software design is a method of changing a software design while explicitly preserving its design functionalities. The presented approach utilizes design patterns as the basis for refactoring software design. Design solutions are compared through C++ programming language examples to illustrate this approach. Developing reusable components is also discussed; the paper shows that constructing such components can diminish the added burden of both refactoring and the use of design patterns.
Refactored M13 Bacteriophage as a Platform for Tumor Cell Imaging and Drug Delivery
Moser, Felix; Endy, Drew; Belcher, Angela M.
2014-01-01
M13 bacteriophage is a well-characterized platform for peptide display. The utility of the M13 display platform is derived from the ability to encode phage protein fusions with display peptides at the genomic level. However, the genome of the phage is complicated by overlaps of key genetic elements. These overlaps directly couple the coding sequence of one gene to the coding or regulatory sequence of another, making it difficult to alter one gene without disrupting the other. Specifically, overlap of the end of gene VII and the beginning of gene IX has prevented the functional genomic modification of the N-terminus of p9. By redesigning the M13 genome to physically separate these overlapping genetic elements, a process known as “refactoring,” we enabled independent manipulation of gene VII and gene IX and the construction of the first N-terminal genomic modification of p9 for peptide display. We demonstrate the utility of this refactored genome by developing an M13 bacteriophage-based platform for targeted imaging of and drug delivery to prostate cancer cells in vitro. This successful use of refactoring principles to reengineer a natural biological system strengthens the suggestion that natural genomes can be rationally designed for a number of applications. PMID:23656279
Refactored M13 bacteriophage as a platform for tumor cell imaging and drug delivery.
Ghosh, Debadyuti; Kohli, Aditya G; Moser, Felix; Endy, Drew; Belcher, Angela M
2012-12-21
M13 bacteriophage is a well-characterized platform for peptide display. The utility of the M13 display platform is derived from the ability to encode phage protein fusions with display peptides at the genomic level. However, the genome of the phage is complicated by overlaps of key genetic elements. These overlaps directly couple the coding sequence of one gene to the coding or regulatory sequence of another, making it difficult to alter one gene without disrupting the other. Specifically, overlap of the end of gene VII and the beginning of gene IX has prevented the functional genomic modification of the N-terminus of p9. By redesigning the M13 genome to physically separate these overlapping genetic elements, a process known as "refactoring," we enabled independent manipulation of gene VII and gene IX and the construction of the first N-terminal genomic modification of p9 for peptide display. We demonstrate the utility of this refactored genome by developing an M13 bacteriophage-based platform for targeted imaging of and drug delivery to prostate cancer cells in vitro. This successful use of refactoring principles to re-engineer a natural biological system strengthens the suggestion that natural genomes can be rationally designed for a number of applications.
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Baggs, Rhoda
2007-01-01
In the early problem-solution era of software programming, functional decompositions were mainly used to design and implement software solutions. In functional decompositions, functions and data are introduced as two separate entities during the design phase, and are followed as such in the implementation phase. Functional decompositions make use of refactoring through optimizing the algorithms, grouping similar functionalities into common reusable functions, and using abstract representations of data where possible; all these are done during the implementation phase. This paper advocates the usage of object-oriented methodologies and design patterns as the centerpieces of refactoring software solutions. Refactoring software is a method of changing software design while explicitly preserving its external functionalities. The combined usage of object-oriented methodologies and design patterns to refactor should also benefit the overall software life-cycle cost by yielding improved software.
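A generic sketch of the kind of design-pattern-driven refactoring the paper advocates (the paper's own examples are in C++; this Python version and its class names are illustrative only), replacing a functional-decomposition-style conditional with polymorphism while preserving external behavior:

```python
# Before: behavior selected by type codes inside one function.
def area_before(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    elif shape["kind"] == "rect":
        return shape["w"] * shape["h"]
    raise ValueError("unknown shape")

# After: each class owns its behavior; adding a new shape requires no edits
# to existing dispatch code ("Replace Conditional with Polymorphism").
class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Rect:
    def __init__(self, w, h): self.w, self.h = w, h
    def area(self): return self.w * self.h

shapes = [Circle(1.0), Rect(2.0, 3.0)]
assert [round(s.area(), 5) for s in shapes] == [3.14159, 6.0]
```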
Toward a Formal Evaluation of Refactorings
NASA Technical Reports Server (NTRS)
Paul, John; Kuzmina, Nadya; Gamboa, Ruben; Caldwell, James
2008-01-01
Refactoring is a software development strategy that characteristically alters the syntactic structure of a program without changing its external behavior [2]. In this talk we present a methodology for extracting formal models from programs in order to evaluate how incremental refactorings affect the verifiability of their structural specifications. We envision that this same technique may be applicable to other types of properties such as those that concern the design and maintenance of safety-critical systems.
Ren, Hengqian; Hu, Pingfan; Zhao, Huimin
2017-08-01
Pathway refactoring serves as an invaluable synthetic biology tool for natural product discovery, characterization, and engineering. However, the complicated and laborious molecular biology techniques largely hinder its application in natural product research, especially in a high-throughput manner. Here we report a plug-and-play pathway refactoring workflow for high-throughput, flexible pathway construction, and expression in both Escherichia coli and Saccharomyces cerevisiae. Biosynthetic genes were firstly cloned into pre-assembled helper plasmids with promoters and terminators, resulting in a series of expression cassettes. These expression cassettes were further assembled using Golden Gate reaction to generate fully refactored pathways. The inclusion of spacer plasmids in this system would not only increase the flexibility for refactoring pathways with different number of genes, but also facilitate gene deletion and replacement. As proof of concept, a total of 96 pathways for combinatorial carotenoid biosynthesis were built successfully. This workflow should be generally applicable to different classes of natural products produced by various organisms. Biotechnol. Bioeng. 2017;114: 1847-1854. © 2017 Wiley Periodicals, Inc.
Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies.
Rahmani, Elior; Zaitlen, Noah; Baran, Yael; Eng, Celeste; Hu, Donglei; Galanter, Joshua; Oh, Sam; Burchard, Esteban G; Eskin, Eleazar; Zou, James; Halperin, Eran
2016-05-01
In epigenome-wide association studies (EWAS), different methylation profiles of distinct cell types may lead to false discoveries. We introduce ReFACTor, a method based on principal component analysis (PCA) and designed for the correction of cell type heterogeneity in EWAS. ReFACTor does not require knowledge of cell counts, and it provides improved estimates of cell type composition, resulting in improved power and control for false positives in EWAS. Corresponding software is available at http://www.cs.tau.ac.il/~heran/cozygene/software/refactor.html.
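The sketch below is NOT the published ReFACTor algorithm (which adds a feature-selection step over informative sites); it is only a generic, hedged illustration of the shared idea of using leading principal components of the methylation matrix as covariates to absorb cell-type composition in an EWAS regression. The toy data and component count are assumptions.

```python
# Generic sketch: methylation matrix `meth` is sites x samples and is
# mean-centered per site before extracting components over samples.
import numpy as np

rng = np.random.default_rng(0)
meth = rng.normal(size=(1000, 50))           # toy data: 1000 sites, 50 samples
meth -= meth.mean(axis=1, keepdims=True)     # center each site

# Leading components over samples (columns) capture major covariance such as
# cell-type composition.
U, S, Vt = np.linalg.svd(meth, full_matrices=False)
k = 5
components = Vt[:k].T                        # samples x k covariate matrix

# These columns would then be included as covariates when testing each site
# against the phenotype, e.g. phenotype ~ site + components.
print(components.shape)                      # (50, 5)
```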
Christen, Matthias; Deutsch, Samuel; Christen, Beat
2015-08-21
Recent advances in synthetic biology have resulted in an increasing demand for the de novo synthesis of large-scale DNA constructs. Any process improvement that enables fast and cost-effective streamlining of digitized genetic information into fabricable DNA sequences holds great promise to study, mine, and engineer genomes. Here, we present Genome Calligrapher, a computer-aided design web tool intended for whole genome refactoring of bacterial chromosomes for de novo DNA synthesis. By applying a neutral recoding algorithm, Genome Calligrapher optimizes GC content and removes obstructive DNA features known to interfere with the synthesis of double-stranded DNA and the higher order assembly into large DNA constructs. Subsequent bioinformatics analysis revealed that synthesis constraints are prevalent among bacterial genomes. However, a low level of codon replacement is sufficient for refactoring bacterial genomes into easy-to-synthesize DNA sequences. To test the algorithm, 168 kb of synthetic DNA comprising approximately 20 percent of the synthetic essential genome of the cell-cycle bacterium Caulobacter crescentus was streamlined and then ordered from a commercial supplier of low-cost de novo DNA synthesis. The successful assembly into eight 20 kb segments indicates that Genome Calligrapher algorithm can be efficiently used to refactor difficult-to-synthesize DNA. Genome Calligrapher is broadly applicable to recode biosynthetic pathways, DNA sequences, and whole bacterial genomes, thus offering new opportunities to use synthetic biology tools to explore the functionality of microbial diversity. The Genome Calligrapher web tool can be accessed at https://christenlab.ethz.ch/GenomeCalligrapher .
Code Analysis and Refactoring with Clang Tools, Version 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, Timothy M.
2016-12-23
Code Analysis and Refactoring with Clang Tools is a small set of example code that demonstrates techniques for applying tools distributed with the open source Clang compiler. Examples include analyzing where variables are used and replacing old data structures with standard structures.
A plug-in to Eclipse for VHDL source codes: functionalities
NASA Astrophysics Data System (ADS)
Niton, B.; Poźniak, K. T.; Romaniuk, R. S.
The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGA and/or DSP processors. An implementation is described, based on VEditor, a free-license program; the work presented in this paper thus supplements and extends that free-license tool. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL. Particular attention is given to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the written plug-in are presented, such as the programming extension concept and the results of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.
Handbook for Implementing Agile in Department of Defense Information Technology Acquisition
2010-12-15
[Figure: Wire-frame mockup of iTunes Cover Flow feature (source: http://www.balsamiq.com/products/mockups/examples#mytunez)] ... programming. The JOPES customer was included early in the development process in order to understand requirements management (story cards), observe ... transition by teaching the new members Agile processes, such as story card development, refactoring, and pair programming. Additionally, the team worked to
Rational synthetic pathway refactoring of natural products biosynthesis in actinobacteria.
Tan, Gao-Yi; Liu, Tiangang
2017-01-01
Natural products (NPs) and their derivatives are widely used as frontline treatments for many diseases. Actinobacteria spp. are used to produce most NP antibiotics and have also been intensively investigated for NP production, derivatization, and discovery. However, due to the complicated transcriptional and metabolic regulation of NP biosynthesis in Actinobacteria, especially in the cases of genome mining and heterologous expression, it is often difficult to rationally and systematically engineer synthetic pathways to maximize biosynthetic efficiency. With the emergence of new tools and methods in metabolic engineering, the synthetic pathways of many chemicals, such as fatty acids and biofuels, in model organisms (e.g., Escherichia coli) have been refactored to realize precise and flexible control of production. These studies also offer a promising approach for synthetic pathway refactoring in Actinobacteria. In this review, the great potential of Actinobacteria as a microbial cell factory for biosynthesis of NPs is discussed. To this end, recent progress in metabolic engineering of NP synthetic pathways in Actinobacteria is summarized, and strategies and perspectives to rationally and systematically refactor synthetic pathways in Actinobacteria are highlighted. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Image compression using singular value decomposition
NASA Astrophysics Data System (ADS)
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage. So we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply Singular Value Decomposition (SVD) on the image matrix. In this method, the digital image matrix is decomposed by SVD, which refactors the given digital image into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, hence reducing the storage space required by the image. The goal here is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or not, invertible or not. Compression ratio and Mean Square Error are used as performance metrics.
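A minimal sketch of SVD-based image compression with NumPy: keep only the k largest singular values and reconstruct a rank-k approximation. The synthetic image and the choice k = 32 are assumptions for illustration; in practice the matrix would be a loaded grayscale image.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))               # stand-in for a grayscale image

U, s, Vt = np.linalg.svd(image, full_matrices=False)

k = 32                                       # number of singular values kept
compressed = (U[:, :k] * s[:k]) @ Vt[:k]     # rank-k reconstruction

# Storage: k columns of U, k rows of Vt, and k singular values.
stored_values = k * (image.shape[0] + image.shape[1] + 1)
compression_ratio = image.size / stored_values
mse = np.mean((image - compressed) ** 2)     # Mean Square Error metric
print(f"compression ratio {compression_ratio:.1f}, MSE {mse:.4f}")
```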
NASA Astrophysics Data System (ADS)
Barnsley, R. M.; Steele, Iain A.; Smith, R. J.; Mawson, Neil R.
2014-07-01
The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline will be discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and changing the catalogue used for source cross-matching from USNO-B1 to APASS. In addition to this, details will be given relating to the development of a preliminary front-end to the source extracted database which will allow a user to perform common queries such as cone searches and light curve comparisons of catalogue and non-catalogue matched objects. Some next steps and future ideas for the project will also be presented.
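As a hedged illustration of one of the front-end queries mentioned above, the sketch below shows the angular-separation test behind a cone search. The in-memory catalogue and the `ra`/`dec` column names are assumptions; the actual system would run an equivalent query against its PostgreSQL database.

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation between two sky positions, all in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def cone_search(catalogue, ra0, dec0, radius_deg):
    """Keep rows within radius_deg of the target position (ra0, dec0)."""
    return [row for row in catalogue
            if angular_separation_deg(row["ra"], row["dec"], ra0, dec0) <= radius_deg]

catalogue = [{"id": 1, "ra": 10.68, "dec": 41.27},
             {"id": 2, "ra": 83.82, "dec": -5.39}]
print(cone_search(catalogue, 10.7, 41.3, 0.5))   # -> the first row only
```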
Towards refactoring the Molecular Function Ontology with a UML profile for function modeling.
Burek, Patryk; Loebe, Frank; Herre, Heinrich
2017-10-04
Gene Ontology (GO) is the largest resource for cataloging gene products. This resource grows steadily and, naturally, this growth raises issues regarding the structure of the ontology. Moreover, modeling and refactoring large ontologies such as GO is generally far from being simple, as a whole as well as when focusing on certain aspects or fragments. It seems that human-friendly graphical modeling languages such as the Unified Modeling Language (UML) could be helpful in connection with these tasks. We investigate the use of UML for making the structural organization of the Molecular Function Ontology (MFO), a sub-ontology of GO, more explicit. More precisely, we present a UML dialect, called the Function Modeling Language (FueL), which is suited for capturing functions in an ontologically founded way. FueL is equipped, among other features, with language elements that arise from studying patterns of subsumption between functions. We show how to use this UML dialect for capturing the structure of molecular functions. Furthermore, we propose and discuss some refactoring options concerning fragments of MFO. FueL enables the systematic, graphical representation of functions and their interrelations, including making information explicit that is currently either implicit in MFO or is mainly captured in textual descriptions. Moreover, the considered subsumption patterns lend themselves to the methodical analysis of refactoring options with respect to MFO. On this basis we argue that the approach can increase the comprehensibility of the structure of MFO for humans and can support communication, for example, during revision and further development.
Del Carratore, Francesco; Jankevics, Andris; Eisinga, Rob; Heskes, Tom; Hong, Fangxin; Breitling, Rainer
2017-09-01
The Rank Product (RP) is a statistical technique widely used to detect differentially expressed features in molecular profiling experiments such as transcriptomics, metabolomics and proteomics studies. An implementation of the RP and the closely related Rank Sum (RS) statistics has been available in the RankProd Bioconductor package for several years. However, several recent advances in the understanding of the statistical foundations of the method have made a complete refactoring of the existing package desirable. We implemented a completely refactored version of the RankProd package, which provides a more principled implementation of the statistics for unpaired datasets. Moreover, the permutation-based P-value estimation methods have been replaced by exact methods, providing faster and more accurate results. RankProd 2.0 is available at Bioconductor ( https://www.bioconductor.org/packages/devel/bioc/html/RankProd.html ) and as part of the mzMatch pipeline ( http://www.mzmatch.sourceforge.net ). rainer.breitling@manchester.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
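A toy illustration of the Rank Product statistic itself (this is NOT the RankProd 2.0 R API; the simulated fold changes and feature count are assumptions): rank each feature within every replicate comparison and take the geometric mean of its ranks, so that small values flag features consistently near the top of every ranking.

```python
import numpy as np

rng = np.random.default_rng(2)
fold_changes = rng.normal(size=(100, 4))     # 100 features x 4 replicates
fold_changes[0] += 3.0                       # feature 0 is truly up-regulated

# Rank within each replicate: rank 1 = largest fold change.
ranks = (-fold_changes).argsort(axis=0).argsort(axis=0) + 1
rank_product = np.exp(np.log(ranks).mean(axis=1))   # geometric mean of ranks

print(int(rank_product.argmin()))            # expected: 0, the spiked-in feature
```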
Discovery of a Phosphonoacetic Acid Derived Natural Product by Pathway Refactoring.
Freestone, Todd S; Ju, Kou-San; Wang, Bin; Zhao, Huimin
2017-02-17
The activation of silent natural product gene clusters is a synthetic biology problem of great interest. As the rate at which gene clusters are identified outpaces the discovery rate of new molecules, this unknown chemical space is rapidly growing, as too are the rewards for developing technologies to exploit it. One class of natural products that has been underrepresented is phosphonic acids, which have important medical and agricultural uses. Hundreds of phosphonic acid biosynthetic gene clusters have been identified encoding for unknown molecules. Although methods exist to elicit secondary metabolite gene clusters in native hosts, they require the strain to be amenable to genetic manipulation. One method to circumvent this is pathway refactoring, which we implemented in an effort to discover new phosphonic acids from a gene cluster from Streptomyces sp. strain NRRL F-525. By reengineering this cluster for expression in the production host Streptomyces lividans, utility of refactoring is demonstrated with the isolation of a novel phosphonic acid, O-phosphonoacetic acid serine, and the characterization of its biosynthesis. In addition, a new biosynthetic branch point is identified with a phosphonoacetaldehyde dehydrogenase, which was used to identify additional phosphonic acid gene clusters that share phosphonoacetic acid as an intermediate.
Calculation of the transverse parton distribution functions at next-to-next-to-leading order
NASA Astrophysics Data System (ADS)
Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin
2014-06-01
We describe the perturbative calculation of the transverse parton distribution functions in all partonic channels up to next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate the cancellation of light-cone divergences and show that universal process-independent transverse parton distribution functions can be obtained through a refactorization. Our results serve as the first explicit higher-order calculation of these functions starting from first principles, and can be used to perform next-to-next-to-next-to-leading logarithmic q_T resummation for a large class of processes at hadron colliders.
Morgado, Gaspar; Gerngross, Daniel; Roberts, Tania M; Panke, Sven
Cell-free biosynthesis in the form of in vitro multi-enzyme reaction networks or enzyme cascade reactions emerges as a promising tool to carry out complex catalysis in one-step, one-vessel settings. It combines the advantages of well-established in vitro biocatalysis with the power of multi-step in vivo pathways. Such cascades have been successfully applied to the synthesis of fine and bulk chemicals, monomers and complex polymers of chemical importance, and energy molecules from renewable resources as well as electricity. The scale of these initial attempts remains small, suggesting that more robust control of such systems and more efficient optimization are currently major bottlenecks. To this end, the very nature of enzyme cascade reactions as multi-membered systems requires novel approaches for implementation and optimization, some of which can be obtained from in vivo disciplines (such as pathway refactoring and DNA assembly), and some of which can be built on the unique, cell-free properties of cascade reactions (such as easy analytical access to all system intermediates to facilitate modeling).
GCS component development cycle
NASA Astrophysics Data System (ADS)
Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti
2012-09-01
The GTC is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13/07/2007 and since then it has been in the operation phase. The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA and it is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML) any of the GCS subsystems, an initial component description of its interface is obtained, and from that information a component specification is written. In order to improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework, called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, in only one step the component is generated, compiled and deployed to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve of new developers and the development error rate, allows a systematic use of design patterns in the development and software reuse, speeds up the deliverables of the software product, massively increases the timescale, design consistency and design quality, and eliminates the future refactoring process required for the code.
nmsBuilder: Freeware to create subject-specific musculoskeletal models for OpenSim.
Valente, Giordano; Crimi, Gianluigi; Vanella, Nicola; Schileo, Enrico; Taddei, Fulvia
2017-12-01
Musculoskeletal modeling and simulations of movement have been increasingly used in orthopedic and neurological scenarios, with increased attention to subject-specific applications. In general, musculoskeletal modeling applications have been facilitated by the development of dedicated software tools; however, subject-specific studies have been limited also by time-consuming modeling workflows and high skilled expertise required. In addition, no reference tools exist to standardize the process of musculoskeletal model creation and make it more efficient. Here we present a freely available software application, nmsBuilder 2.0, to create musculoskeletal models in the file format of OpenSim, a widely-used open-source platform for musculoskeletal modeling and simulation. nmsBuilder 2.0 is the result of a major refactoring of a previous implementation that moved a first step toward an efficient workflow for subject-specific model creation. nmsBuilder includes a graphical user interface that provides access to all functionalities, based on a framework for computer-aided medicine written in C++. The operations implemented can be used in a workflow to create OpenSim musculoskeletal models from 3D surfaces. A first step includes data processing to create supporting objects necessary to create models, e.g. surfaces, anatomical landmarks, reference systems; and a second step includes the creation of OpenSim objects, e.g. bodies, joints, muscles, and the corresponding model. We present a case study using nmsBuilder 2.0: the creation of an MRI-based musculoskeletal model of the lower limb. The model included four rigid bodies, five degrees of freedom and 43 musculotendon actuators, and was created from 3D surfaces of the segmented images of a healthy subject through the modeling workflow implemented in the software application. We have presented nmsBuilder 2.0 for the creation of musculoskeletal OpenSim models from image-based data, and made it freely available via nmsbuilder.org. This application provides an efficient workflow for model creation and helps standardize the process. We hope this would help promote personalized applications in musculoskeletal biomechanics, including larger sample size studies, and might also represent a basis for future developments for specific applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.
Jeschek, Markus; Gerngross, Daniel; Panke, Sven
2016-03-31
Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.
NASA Astrophysics Data System (ADS)
Mazurov, Alexander; Couturier, Ben; Popov, Dmitry; Farley, Nathanael
2017-10-01
Any time you modify an implementation within a program, or change the compiler version or operating system, you should also do regression testing. You can do regression testing by rerunning existing tests against the changes to determine whether this breaks anything that worked prior to the change, and by writing new tests where necessary. At LHCb we have a huge codebase which is maintained by many people and can be run within different setups. Such situations make it crucial to guide refactoring with a central profiling system that helps to run tests and find the impact of changes. In our work we present a software architecture and tools for running a profiling system. This system is responsible for systematically running regression tests, collecting and comparing results of these tests so changes between different setups can be observed and reported. The main feature of our solution is that it is based on a microservices architecture. Microservices break a large project into loosely coupled modules, which communicate with each other through simple APIs. Such a modular architectural style helps us to avoid the general pitfalls of monolithic architectures, such as a codebase that is hard to understand and maintain, and ineffective scalability. Our solution also allows us to escape the complexity of the microservices deployment process by using software containers and services management tools. Containers and service managers let us quickly deploy linked modules in development, production or any other environment. Most of the developed modules are generic, which means that the proposed architecture and tools can be used not only at LHCb but also adopted by other experiments and companies.
Thomas-Vaslin, Véronique; Six, Adrien; Ganascia, Jean-Gabriel; Bersini, Hugues
2013-01-01
Dynamic modeling of lymphocyte behavior has primarily been based on population-based differential equations or on cellular agents moving in space and interacting with each other. The final steps of this modeling effort are expressed in code written in a programming language. Owing to the complete lack of standardization of the different steps of this process, communication and sharing between experimentalists, theoreticians and programmers remain poor. The adoption of a diagrammatic visual computer language should, however, greatly help immunologists to better communicate, to more easily identify similarities between models, and to facilitate the reuse and extension of existing software models. Since immunologists often conceptualize the dynamical evolution of immune systems in terms of “state-transitions” of biological objects, we promote the use of the unified modeling language (UML) state-transition diagram. To demonstrate the feasibility of this approach, we present a UML refactoring of two published models of thymocyte differentiation. Originally built with different modeling strategies, a mathematical ordinary differential equation-based model and a cellular automata model, the two models are now in the same visual formalism and can be compared. PMID:24101919
GeoSciML v3.0 - a significant upgrade of the CGI-IUGS geoscience data model
NASA Astrophysics Data System (ADS)
Raymond, O.; Duclaux, G.; Boisvert, E.; Cipolloni, C.; Cox, S.; Laxton, J.; Letourneau, F.; Richard, S.; Ritchie, A.; Sen, M.; Serrano, J.-J.; Simons, B.; Vuollo, J.
2012-04-01
GeoSciML version 3.0 (http://www.geosciml.org), released in late 2011, is the latest version of the CGI-IUGS* Interoperability Working Group geoscience data interchange standard. The new version is a significant upgrade and refactoring of GeoSciML v2 which was released in 2008. GeoSciML v3 has already been adopted by several major international interoperability initiatives, including OneGeology, the EU INSPIRE program, and the US Geoscience Information Network, as their standard data exchange format for geoscience data. GeoSciML v3 makes use of recently upgraded versions of several Open Geospatial Consortium (OGC) and ISO data transfer standards, including GML v3.2, SWE Common v2.0, and Observations and Measurements v2 (ISO 19156). The GeoSciML v3 data model has been refactored from a single large application schema with many packages, into a number of smaller, but related, application schema modules with individual namespaces. This refactoring allows the use and future development of modules of GeoSciML (e.g., GeologicUnit, GeologicStructure, GeologicAge, Borehole) in smaller, more manageable units. As a result of this refactoring and the integration with new OGC and ISO standards, GeoSciML v3 is not backward compatible with previous GeoSciML versions. The scope of GeoSciML has been extended in version 3.0 to include new models for geomorphological data (a Geomorphology application schema), and for geological specimens, geochronological interpretations, and metadata for geochemical and geochronological analyses (a LaboratoryAnalysis-Specimen application schema). In addition, there is better support for borehole data, and the PhysicalProperties model now supports a wider range of petrophysical measurements. The previously used CGI_Value data type has been superseded in favour of externally governed data types provided by OGC's SWE Common v2 and GML v3.2 data standards. The GeoSciML v3 release includes worked examples of best practice in delivering geochemical analytical data using the Observations and Measurements (ISO19156) and SWE Common v2 models. The GeoSciML v3 data model does not include vocabularies to support the data model. However, it does provide a standard pattern to reference controlled vocabulary concepts using HTTP-URIs. The international GeoSciML community has developed distributed RDF-based geoscience vocabularies that can be accessed by GeoSciML web services using the standard pattern recommended in GeoSciML v3. GeoSciML v3 is the first version of GeoSciML that will be accompanied by web service validation tools using Schematron rules. For example, these validation tools may check for compliance of a web service to a particular profile of GeoSciML, or for logical consistency of data content that cannot be enforced by the application schemas. This validation process will support accreditation of GeoSciML services and a higher degree of semantic interoperability. * International Union of Geological Sciences Commission for Management and Application of Geoscience Information (CGI-IUGS)
OpenSeesPy: Python library for the OpenSees finite element framework
NASA Astrophysics Data System (ADS)
Zhu, Minjie; McKenna, Frank; Scott, Michael H.
2018-01-01
OpenSees, an open source finite element software framework, has been used broadly in the earthquake engineering community for simulating the seismic response of structural and geotechnical systems. The framework allows users to perform finite element analysis with a scripting language and for developers to create both serial and parallel finite element computer applications as interpreters. For the last 15 years, Tcl has been the primary scripting language to which the model building and analysis modules of OpenSees are linked. To provide users with different scripting language options, particularly Python, the OpenSees interpreter interface was refactored to provide multi-interpreter capabilities. This refactoring, resulting in the creation of OpenSeesPy as a Python module, is accomplished through an abstract interface for interpreter calls with concrete implementations for different scripting languages. Through this approach, users are able to develop applications that utilize the unique features of several scripting languages while taking advantage of advanced finite element analysis models and algorithms.
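A minimal sketch of driving OpenSees from Python through the OpenSeesPy module, adapted from the widely documented elastic truss example; the geometry, material values and loads are illustrative assumptions, not taken from the paper, and the module is assumed to be installed (e.g. via `pip install openseespy`).

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)     # 2D model, 2 DOF per node

# Geometry: two supports and one free node forming a two-bar truss.
ops.node(1, 0.0, 0.0)
ops.node(2, 144.0, 0.0)
ops.node(3, 72.0, 96.0)
ops.fix(1, 1, 1)
ops.fix(2, 1, 1)

ops.uniaxialMaterial('Elastic', 1, 3000.0)   # elastic material, E = 3000
ops.element('Truss', 1, 1, 3, 10.0, 1)       # area = 10.0, material tag 1
ops.element('Truss', 2, 2, 3, 5.0, 1)

ops.timeSeries('Linear', 1)
ops.pattern('Plain', 1, 1)
ops.load(3, 100.0, -50.0)                    # point load at the free node

ops.constraints('Plain')
ops.numberer('RCM')
ops.system('BandSPD')
ops.integrator('LoadControl', 1.0)
ops.algorithm('Linear')
ops.analysis('Static')
ops.analyze(1)

print(ops.nodeDisp(3))                       # [ux, uy] at the loaded node
```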
Augustin, Megan M.; Ruzicka, Dan R.; Shukla, Ashutosh K.; Augustin, Jörg M.; Starks, Courtney M.; O’Neil-Johnson, Mark; McKain, Michael R.; Evans, Bradley S.; Barrett, Matt D.; Smithson, Ann; Wong, Gane Ka-Shu; Deyholos, Michael K.; Edger, Patrick P.; Pires, J. Chris; Leebens-Mack, James H.; Mann, David A.; Kutchan, Toni M.
2015-01-01
Summary Steroid alkaloids have been shown to elicit a wide range of pharmacological effects that include anticancer and antifungal activities. Understanding the biosynthesis of these molecules is essential to bioengineering for sustainable production. Herein, we investigate the biosynthetic pathway to cyclopamine, a steroid alkaloid that shows promising antineoplastic activities. Supply of cyclopamine is limited, as the current source is solely derived from wild collection of the plant Veratrum californicum. To elucidate the early stages of the pathway to cyclopamine, we interrogated a V. californicum RNA-seq dataset using the cyclopamine accumulation profile as a predefined model for gene expression with the pattern-matching algorithm Haystack. Refactoring candidate genes in Sf9 insect cells led to discovery of four enzymes that catalyze the first six steps in steroid alkaloid biosynthesis to produce verazine, a predicted precursor to cyclopamine. Three of the enzymes are cytochromes P450 while the fourth is a γ-aminobutyrate transaminase; together they produce verazine from cholesterol. PMID:25939370
NASA Astrophysics Data System (ADS)
Huang, T.; Alarcon, C.; Quach, N. T.
2014-12-01
Capture, curation, and analysis are the typical activities performed at any given Earth Science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet the mission's quality-of-service requirements, and able to manage the life-cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems to automate data capturing, data curation, and data analysis activities. NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC)'s Data Management and Archive System (DMAS) is its core data infrastructure that handles capturing and distribution of hundreds of thousands of satellite observations each day around the clock. DMAS is an application of the Horizon framework. The NASA Global Imagery Browse Services (GIBS) is NASA's Earth Observing System Data and Information System (EOSDIS) solution for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), an application of the Horizon framework, is a core subsystem for GIBS responsible for data capturing and imagery generation automation to support the EOSDIS' 12 distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.
Refactoring a CS0 Course for Engineering Students to Use Active Learning
ERIC Educational Resources Information Center
Lokkila, Erno; Kaila, Erkki; Lindén, Rolf; Laakso, Mikko-Jussi; Sutinen, Erkki
2017-01-01
Purpose: The purpose of this paper was to determine whether applying e-learning material to a course leads to consistently improved student performance. Design/methodology/approach: This paper analyzes grade data from seven instances of the course. The first three instances were performed traditionally. After an intervention, in the form of…
Yu, Lingjun; Su, Wei; Fey, Paul D; Liu, Fengquan; Du, Liangcheng
2018-01-19
The cyclic lipodepsipeptides WAP-8294A are antibiotics with potent activity against methicillin-resistant Staphylococcus aureus (MRSA). One member of this family, WAP-8294A2 (Lotilibcin), was in clinical trials due to its high activity and distinct chemistry. However, WAP-8294A compounds are produced in a very low yield by Lysobacter and only under very stringent conditions. Improving WAP-8294A yield has become critical for the research and application of these anti-MRSA compounds. Here, we report a strategy to increase WAP-8294A production. We first used the CRISPR/dCas9 system to increase the expression of five cotranscribed genes (orf1-5) in the WAP gene cluster, by fusing the omega subunit of RNA polymerase with dCas9 that targets the operon's promoter region. This increased the transcription of the genes by 5- to 48-fold in strain dCas9-ω3. We then refactored four putative self-protection genes (orf6, orf7, orf9 and orf10) by reorganizing them into an operon under the control of a strong Lysobacter promoter, P_HSAF. The refactored operon was introduced into strain dCas9-ω3, and the transcription of the self-protection genes increased by 20- to 60-fold in the resultant engineered strains. The yields of the three main WAP-8294A compounds, WAP-8294A1, WAP-8294A2, and WAP-8294A4, increased by 6-, 4-, and 9-fold, respectively, in the engineered strains. The data also showed that the yield increase of WAP-8294A compounds was mainly due to an increase in their extracellular distribution. WAP-8294A2 exhibited potent (MIC 0.2-0.8 μg/mL) and specific activity against S. aureus among a battery of clinically relevant Gram-positive pathogens (54 isolates).
RSAT 2018: regulatory sequence analysis tools 20th anniversary.
Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane
2018-05-02
RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.
A posteriori operation detection in evolving software models
Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti
2013-01-01
Like every software artifact, software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches that also detects composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is that the specifications already available for executing composite operations can be reused to detect applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366
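The post-processing idea can be illustrated with a small sketch (not the authors' algorithm or metamodel): given a list of already-detected atomic operations, scan it for a combination that matches a composite-operation specification, here a hypothetical "pull up field" refactoring.

```python
# Minimal sketch (illustrative only, not the paper's detection engine): find an
# occurrence of a composite operation among detected atomic operations.
# The operation kinds and the "PullUpField" pattern are hypothetical.
atomic_ops = [
    {"kind": "DeleteFeature", "feature": "name", "owner": "Student"},
    {"kind": "DeleteFeature", "feature": "name", "owner": "Teacher"},
    {"kind": "AddFeature",    "feature": "name", "owner": "Person"},
]

def detect_pull_up_field(ops):
    """Report occurrences that look like a 'pull up field' refactoring: the same
    feature deleted in several classes and added to another (super)class."""
    adds    = [o for o in ops if o["kind"] == "AddFeature"]
    deletes = [o for o in ops if o["kind"] == "DeleteFeature"]
    occurrences = []
    for add in adds:
        matching = [d for d in deletes if d["feature"] == add["feature"]]
        if len(matching) >= 2:  # pulled up from at least two subclasses
            occurrences.append({"composite": "PullUpField",
                                "target": add["owner"],
                                "sources": [d["owner"] for d in matching]})
    return occurrences

print(detect_pull_up_field(atomic_ops))
# -> [{'composite': 'PullUpField', 'target': 'Person', 'sources': ['Student', 'Teacher']}]
```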
PyGirl: Generating Whole-System VMs from High-Level Prototypes Using PyPy
NASA Astrophysics Data System (ADS)
Bruni, Camillo; Verwaest, Toon
Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.
EMAN2: an extensible image processing suite for electron microscopy.
Tang, Guang; Peng, Liwei; Baldwin, Philip R; Mann, Deepinder S; Jiang, Wen; Rees, Ian; Ludtke, Steven J
2007-01-01
EMAN is a scientific image processing package with a particular focus on single particle reconstruction from transmission electron microscopy (TEM) images. It was first released in 1999, and new versions have been released typically 2-3 times each year since that time. EMAN2 has been under development for the last two years, with a completely refactored image processing library, and a wide range of features to make it much more flexible and extensible than EMAN1. The user-level programs are better documented, more straightforward to use, and written in the Python scripting language, so advanced users can modify the programs' behavior without any recompilation. A completely rewritten 3D transformation class simplifies translation between Euler angle standards and symmetry conventions. The core C++ library has over 500 functions for image processing and associated tasks, and it is modular with introspection capabilities, so programmers can add new algorithms with minimal effort and programs can incorporate new capabilities automatically. Finally, a flexible new parallelism system has been designed to address the shortcomings in the rigid system in EMAN1.
Introducing Object-Oriented Concepts into GSI
NASA Technical Reports Server (NTRS)
Guo, Jing; Todling, Ricardo
2017-01-01
Enhancements are now being made to the Gridpoint Statistical Interpolation (GSI) data assimilation system to expand its capabilities. This effort opens the way for broadening the scope of GSI's applications by using some standard object-oriented features in Fortran, and represents a starting point for the so-called GSI refactoring, as a part of the Joint Effort for Data assimilation Integration (JEDI) project of JCSDA.
Refactorizing NRQCD short-distance coefficients in exclusive quarkonium production
NASA Astrophysics Data System (ADS)
Jia, Yu; Yang, Deshan
2009-06-01
In a typical exclusive quarkonium production process, when the center-of-mass energy, √s, is much greater than the heavy quark mass m, large kinematic logarithms of s/m will unavoidably arise at each order of the perturbative expansion in the short-distance coefficients of the nonrelativistic QCD (NRQCD) factorization formalism, which may potentially harm the perturbative expansion. This symptom reflects that the hard regime in NRQCD factorization is too coarse and should be further factorized. We suggest that this regime can be further separated into "hard" and "collinear" degrees of freedom, so that the familiar light-cone approach can be employed to reproduce the NRQCD matching coefficients at the zeroth order of m/s and order by order in α. Taking two simple processes, exclusive η+γ production in e+e- annihilation and Higgs boson radiative decay into ϒ, as examples, we illustrate how the leading logarithms of s/m in the NRQCD matching coefficients are identified and summed to all orders in α with the aid of the Brodsky-Lepage evolution equation.
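Schematically, and only as an illustration of the refactorization being described (normalizations, process-specific factors, and the precise power counting are omitted), the light-cone form of the short-distance coefficient and the Brodsky-Lepage evolution that resums its leading logarithms can be written as:

```latex
% Schematic only; overall factors and process dependence suppressed.
\begin{align}
  C(s, m; \mu) &\simeq \int_0^1 dx \; T_H(x, s; \mu)\, \phi(x; \mu)
      \;+\; \text{power corrections}, \\
  \mu^2 \frac{d}{d\mu^2}\, \phi(x;\mu)
      &= \int_0^1 dy \; V(x, y; \alpha_s(\mu))\, \phi(y;\mu)
      \qquad \text{(Brodsky--Lepage evolution)} .
\end{align}
```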
2015-04-29
in which we applied these adaptation patterns to an adaptive news web server intended to tolerate extremely heavy, unexpected loads. To address...collection of existing models used as benchmarks for OO-based refactoring and an existing web -based repository called REMODD to provide users with model...invariant properties. Specifically, we developed Avida- MDE (based on the Avida digital evolution platform) to support the automatic generation of software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, Richard; Voter, Arthur; Uberuaga, Blas
2017-10-23
The SpecTAD software represents a refactoring of the Temperature Accelerated Dynamics (TAD2) code authored by Arthur F. Voter and Blas P. Uberuaga (LA-CC-02-05). SpecTAD extends the capabilities of TAD2, by providing algorithms for both temporal and spatial parallelism. The novel algorithms for temporal parallelism include both speculation and replication based techniques. SpecTAD also offers the optional capability to dynamically link to the open-source LAMMPS package.
NASA Astrophysics Data System (ADS)
Mirvis, E.; Iredell, M.
2015-12-01
The operational (OPS) NOAA National Centers for Environmental Prediction (NCEP) suite traditionally consists of a large set of multi-scale HPC models, workflows, scripts, tools, and utilities that depend heavily on a variety of additional components. This suite relies on a unique collection of more than 20 in-house shared libraries (NCEPLIBS), specific versions of third-party libraries (such as netCDF, HDF, ESMF, JasPer, XML, etc.), and an HPC workflow tool, all within a dedicated (and sometimes vendor-customized) homogeneous HPC environment. This domain and site specificity, combined with NCEP's product-driven, large-scale, real-time data operations, greatly complicates collaborative development at NCEP by making it difficult to replicate the OPS environment anywhere else. The mission of the NOAA/NCEP Environmental Modeling Center (EMC) is to develop and improve numerical weather, climate, hydrological, and ocean prediction through partnership with the research community. Recognizing these difficulties, EMC has lately taken an innovative approach to improving the flexibility of the HPC environment by building the elements of, and a foundation for, an NCEP OPS functionally equivalent environment (FEE), which can also be used to ease external interfacing. Aiming to reduce the turnaround time of community code enhancements via the Research-to-Operations (R2O) cycle, EMC has developed and deployed several project subset standards that have already paved the road to NCEP OPS implementation standards. In this presentation we will discuss the EMC FEE for O2R requirements and approaches to collaborative standardization, including the NCEPLIBS FEE and model code version control paired with the models' derived customized HPC modules and FEE footprints. We will share NCEP/EMC experience and potential in the refactoring of EMC development processes and legacy codes, and in securing model source code quality standards by using the Eclipse IDE integrated with reverse engineering tools/APIs. We will also report on collaborative efforts in restructuring the NOAA Environmental Modeling System (NEMS), the multi-model coupling framework, and in transitioning the FEE verification methodology.
Implementation of collisions on GPU architecture in the Vorpal code
NASA Astrophysics Data System (ADS)
Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John
2017-10-01
The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
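The mapping of collisions onto single-instruction, multiple-data hardware can be illustrated with a NumPy sketch of the null-collision method, in which every particle performs the same bounded amount of work per step. The cross-section model and constants below are invented, and the actual Vorpal implementation is GPU code, not NumPy.

```python
# Sketch of the null-collision method as a data-parallel, per-particle operation.
# Illustrative only: toy cross-section model and made-up constants.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
v = rng.normal(0.0, 1.0e5, size=(n, 3))       # particle velocities [m/s]

n_gas = 1.0e20                                 # background gas density [1/m^3]
def cross_section(speed):                      # toy model: sigma falls with speed
    return 1.0e-19 / (1.0 + speed / 1.0e5)

speed = np.linalg.norm(v, axis=1)
nu = n_gas * cross_section(speed) * speed      # per-particle collision frequency
nu_max = nu.max()                              # "null" frequency bounds all particles
dt = 1.0e-9

# Candidate collisions occur with probability 1 - exp(-nu_max*dt); a candidate is a
# real collision with probability nu/nu_max. The single draw below has the same
# marginal probability as that two-step selection.
p_any = 1.0 - np.exp(-nu_max * dt)
collides = rng.random(n) < p_any * (nu / nu_max)

# Toy isotropic scatter (speed-preserving) applied only to the colliding particles.
idx = np.where(collides)[0]
phi = rng.uniform(0, 2 * np.pi, idx.size)
cos_t = rng.uniform(-1, 1, idx.size)
sin_t = np.sqrt(1 - cos_t**2)
v[idx] = speed[idx, None] * np.stack([sin_t * np.cos(phi),
                                      sin_t * np.sin(phi),
                                      cos_t], axis=1)
```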
Archive Management of NASA Earth Observation Data to Support Cloud Analysis
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark A.
2017-01-01
NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify the execution of such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.
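As a rough illustration of the change in access pattern (not the actual EOSDIS services, products, or URLs), a subsetting service that formerly used seek-and-read on a POSIX file system would instead issue HTTP range requests against web object storage:

```python
# Illustrative only: reading a byte range from web object storage with an HTTP
# Range request, the rough equivalent of seek()+read() on a POSIX file.
# The URL, file name, and byte offsets are hypothetical.
import requests

url = "https://example-archive.s3.amazonaws.com/granules/EXAMPLE.A2017001.hdf"
offset, length = 4096, 65536  # slice containing the variable of interest

resp = requests.get(url,
                    headers={"Range": f"bytes={offset}-{offset + length - 1}"},
                    timeout=60)
resp.raise_for_status()       # expect 206 Partial Content from the object store
chunk = resp.content          # only the requested slice is transferred
print(resp.status_code, len(chunk))
```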
NASA Astrophysics Data System (ADS)
Rueda, Antonio J.; Noguera, José M.; Luque, Adrián
2016-02-01
In recent years GPU computing has gained wide acceptance as a simple, low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation on a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology the CPU code, in principle, only has to be augmented with a few compiler directives to identify the regions to be accelerated and the way in which data have to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and future evolution of GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC, and in two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of an optimized CUDA implementation (about 3.5× slower on average), it provides a significant performance improvement over a CPU implementation (2-6×) with far simpler code and less implementation effort.
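For orientation, the flow-accumulation step that is being accelerated can be sketched in serial form as a topological sweep over a D8 flow-direction grid. This is an illustrative Python version under assumed grid conventions, not the paper's CPU, OpenACC, or CUDA code.

```python
# Serial sketch of D8 flow accumulation. Each cell's flow direction indexes one of
# its 8 neighbours; accumulation counts how many cells drain through each cell.
# Direction encoding and grid values here are made up for illustration.
from collections import deque
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def flow_accumulation(fdir):
    rows, cols = fdir.shape
    downstream = np.full((rows, cols, 2), -1, dtype=int)
    indegree = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            d = fdir[r, c]
            if d >= 0:  # -1 marks an outlet with no downstream cell
                rr, cc = r + OFFSETS[d][0], c + OFFSETS[d][1]
                if 0 <= rr < rows and 0 <= cc < cols:
                    downstream[r, c] = (rr, cc)
                    indegree[rr, cc] += 1
    acc = np.ones((rows, cols), dtype=int)       # every cell contributes itself
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if indegree[r, c] == 0)
    while queue:                                 # Kahn-style topological sweep
        r, c = queue.popleft()
        rr, cc = downstream[r, c]
        if rr >= 0:
            acc[rr, cc] += acc[r, c]
            indegree[rr, cc] -= 1
            if indegree[rr, cc] == 0:
                queue.append((rr, cc))
    return acc

fdir = np.array([[4, 4, 6],   # toy 3x3 grid: cells drain right (4) or down (6)
                 [4, 4, 6],
                 [4, 4, -1]])
print(flow_accumulation(fdir))   # bottom-right outlet accumulates all 9 cells
```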
FLOWER IPv4/IPv6 Network Flow Summarization software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nickless, Bill; Curtis, Darren; Christy, Jason
FLOWER was written as a refactoring/reimplementation of the existing Flo software used by the Cooperative Protection Program (CPP) to provide network flow summaries for analysis by the Operational Analysis Center (OAC) and other US Department of Energy cyber security elements. FLOWER is designed and tested to operate at 10 gigabits/second, nearly 10 times faster than competing solutions. FLOWER output is optimized for importation into SQL databases for categorization and analysis. FLOWER is written in C++ using current best software engineering practices.
2008-06-01
project is not an isolated OSSD project. Instead, the NetBeans IDE which is the focus of development activities in the NetBeans.org project community...facilitate or constrain the intended usage of the NetBeans IDE. Figure 1 provides a rendering of some of the more visible OSSD projects that...as BioBeans and RefactorIT communities build tools on top of or extending the NetBeans platform or IDE. How do these organizations interact with
Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study
Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...
2015-01-01
This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
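The reported bottleneck and its fix can be illustrated, independently of the Fortran source, by contrasting a sequential summation with a pairwise (binary-tree) reduction whose combination rounds can proceed in parallel:

```python
# Illustration only (Python, not the paper's coarray Fortran): a sequential sum is a
# chain of n-1 dependent additions, while a binary-tree reduction combines values
# pairwise in ceil(log2(n)) rounds, each of which can run in parallel across
# images/threads.
def sequential_sum(values):
    total = 0.0
    for v in values:          # n-1 dependent additions: inherently serial
        total += v
    return total

def tree_sum(values):
    vals = list(values)
    while len(vals) > 1:      # each round halves the list; rounds are parallelizable
        nxt = [a + b for a, b in zip(vals[0::2], vals[1::2])]
        if len(vals) % 2:     # odd element carries over to the next round
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

data = [float(i) for i in range(1, 1025)]
assert sequential_sum(data) == tree_sum(data) == 1024 * 1025 / 2
```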
Lim, Hyun Gyu; Lim, Jae Hyung; Jung, Gyoo Yeol
2015-01-01
Refactoring microorganisms for efficient production of advanced biofuels such as n-butanol from the mixture of sugars in cheap feedstock is a prerequisite for achieving economic feasibility in biorefinery. However, production of biofuel from inedible and cheap feedstock is highly challenging due to the slower utilization of biomass-derived sugars, arising from complex assimilation pathways, difficulties in amplification of biosynthetic pathways for heterologous metabolites, and redox imbalance caused by consuming intracellular reducing power to produce highly reduced biofuels. Even with these problems, the microorganisms should produce biofuel robustly to be industrially feasible. Thus, refactoring microorganisms for efficient conversion is highly desirable in biofuel production. In this study, we engineered a robust Escherichia coli to accomplish high production of n-butanol from galactose-glucose mixtures via the design of modular pathways, an efficient and systematic way to reconstruct an entire metabolic pathway with many target genes. Three modular pathways designed using predictable genetic elements were assembled for efficient galactose utilization, n-butanol production, and redox re-balancing to robustly produce n-butanol from a sugar mixture of galactose and glucose. Specifically, the engineered strain showed dramatically increased n-butanol production (3.3-fold increased, to 6.2 g/L after 48-h fermentation) compared to the parental strain (1.9 g/L) in galactose-supplemented medium. Moreover, fermentation with mixtures of galactose and glucose at various ratios from 2:1 to 1:2 confirmed that our engineered strain was able to robustly produce n-butanol regardless of sugar composition, with simultaneous utilization of galactose and glucose. Collectively, modular pathway engineering of the metabolic network can be an effective approach in strain development for optimal biofuel production with cost-effective fermentable sugars. To the best of our knowledge, this study demonstrates the first and highest n-butanol production from galactose in E. coli. Moreover, robust production of n-butanol from sugar mixtures of variable composition should facilitate the economic feasibility of microbial processes using mixtures of sugars from cheap biomass in the near future.
Zheng, Hai-ming; Li, Guang-jie; Wu, Hao
2015-06-01
Differential optical absorption spectroscopy (DOAS) is a commonly used atmospheric pollution monitoring method. Denoising the monitored spectral data improves the inversion accuracy. The Fourier transform filtering method can effectively filter out the noise in the spectral data, but the algorithm itself can introduce errors. In this paper, a chirp-z transform method is put forward. By locally refining the Fourier transform spectrum, it retains the denoising effect of the Fourier transform while compensating for the error of the algorithm, which further improves the inversion accuracy. The paper studies the concentration retrieval of SO2 and NO2. The results show that simple division causes larger errors and is not very stable, and that the chirp-z transform is more accurate than the Fourier transform. Frequency spectrum analysis shows that the Fourier transform cannot resolve the distortion and weakening of the characteristic absorption spectrum, whereas the chirp-z transform is capable of fine refactoring of a specific region of the frequency spectrum.
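The "local refinement" of the spectrum that the chirp-z transform provides can be illustrated with a zoom DFT evaluated directly over a narrow band. The signal and band limits below are invented, and a production code would use an actual chirp-z (Bluestein) implementation rather than this O(N·M) stand-in.

```python
# Illustration of locally refining a spectrum over a narrow band (the kind of
# computation a chirp-z transform performs efficiently). The zoom DFT here is
# evaluated directly; signal, noise level, and band limits are made up.
import numpy as np

fs = 1000.0                                     # sampling rate [Hz]
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 101.3 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

f_lo, f_hi, n_zoom = 95.0, 105.0, 400           # analyse 95-105 Hz on a fine grid
freqs = np.linspace(f_lo, f_hi, n_zoom)
# Direct evaluation of the DTFT on the zoom grid: one row of complex exponentials
# per requested frequency, applied to the whole record.
kernel = np.exp(-2j * np.pi * np.outer(freqs, t))
zoom_spectrum = kernel @ x

peak = freqs[np.argmax(np.abs(zoom_spectrum))]
print(f"refined peak estimate: {peak:.2f} Hz")  # ~101.3 Hz, finer than the FFT bin width
```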
The GBS code for tokamak scrape-off layer simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halpern, F.D., E-mail: federico.halpern@epfl.ch; Ricci, P.; Jolliet, S.
2016-06-15
We describe a new version of GBS, a 3D global, flux-driven plasma turbulence code to simulate the turbulent dynamics in the tokamak scrape-off layer (SOL), superseding the code presented by Ricci et al. (2012) [14]. The present work is driven by the objective of studying SOL turbulent dynamics in medium-size tokamaks and beyond with a high-fidelity physics model. We emphasize an intertwining framework of improved physics models and the computational improvements that allow them. The model extensions include neutral atom physics, finite ion temperature, the addition of a closed field line region, and a non-Boussinesq treatment of the polarization drift. GBS has been completely refactored with the introduction of a 3-D Cartesian communicator and a scalable parallel multigrid solver. We report dramatically enhanced parallel scalability, with the possibility of treating electromagnetic fluctuations very efficiently. The method of manufactured solutions as a verification process has been carried out for this new code version, demonstrating the correct implementation of the physical model.
Linked Data: Forming Partnerships at the Data Layer
NASA Astrophysics Data System (ADS)
Shepherd, A.; Chandler, C. L.; Arko, R. A.; Jones, M. B.; Hitzler, P.; Janowicz, K.; Krisnadhi, A.; Schildhauer, M.; Fils, D.; Narock, T.; Groman, R. C.; O'Brien, M.; Patton, E. W.; Kinkade, D.; Rauch, S.
2015-12-01
The challenges presented by big data are straining data management software architectures of the past. For smaller existing data facilities, the technical refactoring of software layers become costly to scale across the big data landscape. In response to these challenges, data facilities will need partnerships with external entities for improved solutions to perform tasks such as data cataloging, discovery and reuse, and data integration and processing with provenance. At its surface, the concept of linked open data suggests an uncalculated altruism. Yet, in his concept of five star open data, Tim Berners-Lee explains the strategic costs and benefits of deploying linked open data from the perspective of its consumer and producer - a data partnership. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) addresses some of the emerging needs of its research community by partnering with groups doing complementary work and linking their respective data layers using linked open data principles. Examples will show how these links, explicit manifestations of partnerships, reduce technical debt and provide a swift flexibility for future considerations.
NASA Astrophysics Data System (ADS)
Xue, L.; Firl, G.; Zhang, M.; Jimenez, P. A.; Gill, D.; Carson, L.; Bernardet, L.; Brown, T.; Dudhia, J.; Nance, L. B.; Stark, D. R.
2017-12-01
The Global Model Test Bed (GMTB) has been established to support the evolution of atmospheric physical parameterizations in NCEP global modeling applications. To accelerate the transition to the Next Generation Global Prediction System (NGGPS), a collaborative model development framework known as the Common Community Physics Package (CCPP) has been created within the GMTB to facilitate engagement from the broad community on physics experimentation and development. A key component of this Research-to-Operations (R2O) software framework is the Interoperable Physics Driver (IPD), which connects the physics parameterizations on one end to the dynamical cores on the other with minimal implementation effort. To initiate the CCPP, scientists and engineers from the GMTB separated and refactored the GFS physics. This exercise demonstrated the process of creating IPD-compliant code and can serve as an example for other physics schemes to do the same and be considered for inclusion in the CCPP. Further benefits of this process include run-time physics suite configuration and considerably reduced effort for testing modifications to physics suites through GMTB's physics test harness. The implementation will be described and the preliminary results will be presented at the conference.
Steps towards the synthetic biology of polyketide biosynthesis.
Cummings, Matthew; Breitling, Rainer; Takano, Eriko
2014-02-01
Nature is providing a bountiful pool of valuable secondary metabolites, many of which possess therapeutic properties. However, the discovery of new bioactive secondary metabolites is slowing down, at a time when the rise of multidrug-resistant pathogens and the realization of acute and long-term side effects of widely used drugs lead to an urgent need for new therapeutic agents. Approaches such as synthetic biology are promising to deliver a much-needed boost to secondary metabolite drug development through plug-and-play optimized hosts and refactoring novel or cryptic bacterial gene clusters. Here, we discuss this prospect focusing on one comprehensively studied class of clinically relevant bioactive molecules, the polyketides. Extensive efforts towards optimization and derivatization of compounds via combinatorial biosynthesis and classical engineering have elucidated the modularity, flexibility and promiscuity of polyketide biosynthetic enzymes. Hence, a synthetic biology approach can build upon a solid basis of guidelines and principles, while providing a new perspective towards the discovery and generation of novel and new-to-nature compounds. We discuss the lessons learned from the classical engineering of polyketide synthases and indicate their importance when attempting to engineer biosynthetic pathways using synthetic biology approaches for the introduction of novelty and overexpression of products in a controllable manner. © 2013 The Authors FEMS Microbiology Letters published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenz, Daniel; Wolf, Felix
2016-02-17
The PRIMA-X (Performance Retargeting of Instrumentation, Measurement, and Analysis Technologies for Exascale Computing) project is the successor of the DOE PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing) project, which addressed the challenge of creating a core measurement infrastructure that would serve as a common platform for both integrating leading parallel performance systems (notably TAU and Scalasca) and developing next-generation scalable performance tools. The PRIMA-X project shifts the focus away from refactorization of robust performance tools towards a re-targeting of the parallel performance measurement and analysis architecture for extreme scales. The massive concurrency, asynchronous execution dynamics, hardware heterogeneity, and multi-objective prerequisites (performance, power, resilience) that identify exascale systems introduce fundamental constraints on the ability to carry forward existing performance methodologies. In particular, there must be a deemphasis of per-thread observation techniques to significantly reduce the otherwise unsustainable flood of redundant performance data. Instead, it will be necessary to assimilate multi-level resource observations into macroscopic performance views, from which resilient performance metrics can be attributed to the computational features of the application. This requires a scalable framework for node-level and system-wide monitoring and runtime analyses of dynamic performance information. Also, the interest in optimizing parallelism parameters with respect to performance and energy drives the integration of tool capabilities in the exascale environment further. Initially, PRIMA-X was a collaborative project between the University of Oregon (lead institution) and the German Research School for Simulation Sciences (GRS). Because Prof. Wolf, the PI at GRS, accepted a position as full professor at Technische Universität Darmstadt (TU Darmstadt) starting February 1st, 2015, the project ended at GRS on January 31st, 2015. This report reflects the work accomplished at GRS until then. The work of GRS is expected to be continued at TU Darmstadt. The first main accomplishment of GRS is the design of different thread-level aggregation techniques. We created a prototype capable of aggregating the thread-level information in performance profiles using these techniques. The next step will be the integration of the most promising techniques into the Score-P measurement system and their evaluation. The second main accomplishment is a substantial increase of Score-P's scalability, achieved by improving the design of the system-tree representation in Score-P's profile format. We developed a new representation and a distributed algorithm to create the scalable system tree representation. Finally, we developed a lightweight approach to MPI wait-state profiling. Former algorithms either needed piggy-backing, which can cause significant runtime overhead, or tracing, which comes with its own set of scaling challenges. Our approach works with local data only and, thus, is scalable and has very little overhead.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Jon David; Oppel III, Fred J.; Hart, Brian E.
Umbra is a flexible simulation framework for complex systems that can be used by itself for modeling, simulation, and analysis, or to create specific applications. It has been applied to many operations, primarily dealing with robotics and system of system simulations. This version, from 4.8 to 4.8.3b, incorporates bug fixes, refactored code, and new managed C++ wrapper code that can be used to bridge new applications written in C# to the C++ libraries. The new managed C++ wrapper code includes (project/directories) BasicSimulation, CSharpUmbraInterpreter, LogFileView, UmbraAboutBox, UmbraControls, UmbraMonitor and UmbraWrapper.
Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, S.; Havloej, F.; Lago, D.
2013-07-01
The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major refactoring of the ORIGEN code architecture, undertaken as part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper will present an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D.; Wolf, Felix G.
2014-01-31
The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.
Transverse vetoes with rapidity cutoff in SCET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornig, Andrew; Kang, Daekyoung; Makris, Yiannis
We consider di-jet production in hadron collisions where a transverse veto is imposed on radiation for (pseudo-)rapidities in the central region only, where this central region is defined with rapidity cutoff. For the case where the transverse measurement (e.g., transverse energy or min pT for jet veto) is parametrically larger relative to the typical transverse momentum beyond the cutoff, the cross section is insensitive to the cutoff parameter and is factorized in terms of collinear and soft degrees of freedom. The virtuality for these degrees of freedom is set by the transverse measurement, as in typical transverse-momentum dependent observables such as Drell-Yan, Higgs production, and the event shape broadening. This paper focuses on the other region, where the typical transverse momentum below and beyond the cutoff is of similar size. In this region the rapidity cutoff further resolves soft radiation into (u)soft and soft-collinear radiation with different rapidities but identical virtuality. This gives rise to rapidity logarithms of the rapidity cutoff parameter which we resum using renormalization group methods. We factorize the cross section in this region in terms of soft and collinear functions in the framework of soft-collinear effective theory, then further refactorize the soft function as a convolution of the (u)soft and soft-collinear functions. All these functions are calculated at one-loop order. As an example, we calculate a differential cross section for a specific partonic channel, qq' → qq', for the jet shape angularities and show that the refactorization allows us to resum the rapidity logarithms and significantly reduce theoretical uncertainties in the jet shape spectrum.
Transverse vetoes with rapidity cutoff in SCET
Hornig, Andrew; Kang, Daekyoung; Makris, Yiannis; ...
2017-12-11
We consider di-jet production in hadron collisions where a transverse veto is imposed on radiation for (pseudo-)rapidities in the central region only, where this central region is defined with rapidity cutoff. For the case where the transverse measurement (e.g., transverse energy or min pT for jet veto) is parametrically larger relative to the typical transverse momentum beyond the cutoff, the cross section is insensitive to the cutoff parameter and is factorized in terms of collinear and soft degrees of freedom. The virtuality for these degrees of freedom is set by the transverse measurement, as in typical transverse-momentum dependent observables such as Drell-Yan, Higgs production, and the event shape broadening. This paper focuses on the other region, where the typical transverse momentum below and beyond the cutoff is of similar size. In this region the rapidity cutoff further resolves soft radiation into (u)soft and soft-collinear radiation with different rapidities but identical virtuality. This gives rise to rapidity logarithms of the rapidity cutoff parameter which we resum using renormalization group methods. We factorize the cross section in this region in terms of soft and collinear functions in the framework of soft-collinear effective theory, then further refactorize the soft function as a convolution of the (u)soft and soft-collinear functions. All these functions are calculated at one-loop order. As an example, we calculate a differential cross section for a specific partonic channel, qq' → qq', for the jet shape angularities and show that the refactorization allows us to resum the rapidity logarithms and significantly reduce theoretical uncertainties in the jet shape spectrum.
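Schematically, and only as an illustration of the refactorization described in the two records above (color structure, convolution variables, and rapidity-regulator details suppressed), the soft function is written as a convolution of (u)soft and soft-collinear pieces:

```latex
% Schematic refactorization of the soft function; arguments, color indices and
% rapidity-regulator details are suppressed.
\begin{equation}
  S(k;\mu,\nu) \;\simeq\; \int dk'\; S_{us}(k - k';\mu,\nu)\, S_c(k';\mu,\nu),
\end{equation}
% with the rapidity logarithms of the cutoff parameter resummed by
% renormalization-group evolution between the natural scales of S_{us} and S_c.
```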
Jet shapes in dijet events at the LHC in SCET
NASA Astrophysics Data System (ADS)
Hornig, Andrew; Makris, Yiannis; Mehen, Thomas
2016-04-01
We consider the class of jet shapes known as angularities in dijet production at hadron colliders. These angularities are modified from the original definitions in e+e- collisions to be boost invariant along the beam axis. These shapes apply to the constituents of jets defined with respect to either kT-type (anti-kT, C/A, and kT) algorithms or cone-type algorithms. We present an SCET factorization formula and calculate the ingredients needed to achieve next-to-leading-log (NLL) accuracy in kinematic regions where non-global logarithms are not large. The factorization formula involves previously unstudied "unmeasured beam functions," which are present for finite rapidity cuts around the beams. We derive relations between the jet functions and the shape-dependent part of the soft function that appear in the factorized cross section and those previously calculated for e+e- collisions, and present the calculation of the non-trivial, color-connected part of the soft function to O(αs). This latter part of the soft function is universal in the sense that it applies to any experimental setup with an out-of-jet pT veto and rapidity cuts together with two identified jets, and it is independent of the choice of jet (sub-)structure measurement. In addition, we implement the recently introduced soft-collinear refactorization to resum logarithms of the jet size, valid in the region of non-enhanced non-global logarithm effects. While our results are valid for all 2 → 2 channels, we compute explicitly for the qq' → qq' channel the color-flow matrices and plot the NLL resummed differential dijet cross section as an explicit example, which shows that the normalization and scale uncertainty is reduced when the soft function is refactorized. For this channel, we also plot the jet size R dependence, the pT cut dependence, and the dependence on the angularity parameter a.
Jet shapes in dijet events at the LHC in SCET
Hornig, Andrew; Makris, Yiannis; Mehen, Thomas
2016-04-15
Here, we consider the class of jet shapes known as angularities in dijet production at hadron colliders. These angularities are modified from the original definitions in e+e- collisions to be boost invariant along the beam axis. These shapes apply to the constituents of jets defined with respect to either kT-type (anti-kT, C/A, and kT) algorithms or cone-type algorithms. We present an SCET factorization formula and calculate the ingredients needed to achieve next-to-leading-log (NLL) accuracy in kinematic regions where non-global logarithms are not large. The factorization formula involves previously unstudied "unmeasured beam functions," which are present for finite rapidity cuts around the beams. We derive relations between the jet functions and the shape-dependent part of the soft function that appear in the factorized cross section and those previously calculated for e+e- collisions, and present the calculation of the non-trivial, color-connected part of the soft function to O(αs). This latter part of the soft function is universal in the sense that it applies to any experimental setup with an out-of-jet pT veto and rapidity cuts together with two identified jets, and it is independent of the choice of jet (sub-)structure measurement. In addition, we implement the recently introduced soft-collinear refactorization to resum logarithms of the jet size, valid in the region of non-enhanced non-global logarithm effects. While our results are valid for all 2 → 2 channels, we compute explicitly for the qq' → qq' channel the color-flow matrices and plot the NLL resummed differential dijet cross section as an explicit example, which shows that the normalization and scale uncertainty is reduced when the soft function is refactorized. For this channel, we also plot the jet size R dependence, the pT cut dependence, and the dependence on the angularity parameter a.
List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor
NASA Astrophysics Data System (ADS)
Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.
2014-03-01
List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe
2010-03-31
GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure in Newton and quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line-search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line-search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever of any specific application. You cannot find more general mathematical software.
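A representative member of this class of globalization procedures is backtracking with the Armijo sufficient-decrease condition. The sketch below is in Python rather than the package's C++, and only illustrates the standard textbook algorithm, not GlobiPack's interfaces.

```python
# A representative 1-D line-search globalization procedure: backtracking with the
# Armijo sufficient-decrease condition (cf. Nocedal & Wright, 2nd ed., Ch. 3).
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, c1=1e-4, rho=0.5, max_iter=50):
    """Return a step length alpha along descent direction p satisfying
    f(x + alpha*p) <= f(x) + c1*alpha*grad_f(x).p ."""
    fx, gx = f(x), grad_f(x)
    slope = gx @ p
    assert slope < 0, "p must be a descent direction"
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * p) <= fx + c1 * alpha * slope:
            return alpha
        alpha *= rho                      # shrink the step and try again
    return alpha

# Usage on the Rosenbrock function with a steepest-descent direction.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                        200 * (x[1] - x[0]**2)])
x = np.array([-1.2, 1.0])
alpha = backtracking_line_search(f, g, x, -g(x))
print(alpha, f(x), f(x - alpha * g(x)))   # the accepted step decreases f
```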
IDCDACS: IDC's Distributed Application Control System
NASA Astrophysics Data System (ADS)
Ertl, Martin; Boresch, Alexander; Kianička, Ján; Sudakov, Alexander; Tomuta, Elena
2015-04-01
The Preparatory Commission for the CTBTO is an international organization based in Vienna, Austria. Its mission is to establish a global verification regime to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans all nuclear explosions. For this purpose, time-series data from a global network of seismic, hydro-acoustic and infrasound (SHI) sensors is transmitted to the International Data Centre (IDC) in Vienna in near-real-time, where it is processed to locate events that may be nuclear explosions. We have redesigned the distributed application control system that glues together the various components of the automatic waveform data processing system at the IDC (IDCDACS). Our highly-scalable solution preserves the existing architecture of the IDC processing system that proved successful over many years of operational use, but replaces proprietary components with open-source solutions and custom-developed software. Existing code was refactored and extended to obtain a reusable software framework that is flexibly adaptable to different types of processing workflows. Automatic data processing is organized in series of self-contained processing steps, each series being referred to as a processing pipeline. Pipelines process data by time intervals, i.e. the time-series data received from monitoring stations is organized in segments based on the time when the data was recorded. So-called data monitor applications queue the data for processing in each pipeline based on specific conditions, e.g. data availability, elapsed time or completion states of preceding processing pipelines. IDCDACS consists of a configurable number of distributed monitoring and controlling processes, a message broker and a relational database. All processes communicate through message queues hosted on the message broker. Persistent state information is stored in the database. A configurable processing controller instantiates and monitors all data processing applications. Due to decoupling by message queues the system is highly versatile and failure tolerant. The implementation utilizes the RabbitMQ open-source messaging platform that is based upon the Advanced Message Queuing Protocol (AMQP), an on-the-wire protocol (like HTTP) and open industry standard. IDCDACS uses the high-availability capabilities provided by RabbitMQ and is equipped with failure recovery features to survive network and server outages. It is implemented in C and Python and is operated in a Linux environment at the IDC. Although IDCDACS was specifically designed for the existing IDC processing system, its architecture is generic and reusable for different automatic processing workflows, e.g. similar to those described in (Olivieri et al. 2012, Kværna et al. 2012). Major advantages are its independence of the specific data processing applications used and the possibility to reconfigure IDCDACS for different types of processing, data and trigger logic. A possible future development would be to use the IDCDACS framework for different scientific domains, e.g. for processing of Earth observation satellite data, extending the one-dimensional time-series intervals to spatio-temporal data cubes. REFERENCES Olivieri M., J. Clinton (2012) An almost fair comparison between Earthworm and SeisComp3, Seismological Research Letters, 83(4), 720-727. Kværna, T., S. J. Gibbons, D. B. Harris, D. A. 
Dodge (2012) Adapting pipeline architectures to track developing aftershock sequences and recurrent explosions, Proceedings of the 2012 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies, 776-785.
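The queuing pattern described above, in which a data monitor publishes a processing-interval message for a pipeline over AMQP, can be sketched with the pika RabbitMQ client. The broker address, queue name, and message fields below are hypothetical, not the IDC's.

```python
# Minimal sketch of the queuing pattern: a "data monitor" publishes a
# processing-interval message to a pipeline queue over AMQP (RabbitMQ, via pika).
# Queue name, message fields, and broker address are hypothetical.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="seismic_pipeline.intervals", durable=True)

interval = {"station": "EXAMPLE", "start": "2015-04-01T00:00:00Z",
            "end": "2015-04-01T00:10:00Z", "trigger": "data_available"}
channel.basic_publish(
    exchange="",
    routing_key="seismic_pipeline.intervals",
    body=json.dumps(interval).encode(),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()

# A processing controller would consume from the same queue, e.g.:
#   channel.basic_consume(queue="seismic_pipeline.intervals", on_message_callback=handle)
#   channel.start_consuming()
```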
NASA Technical Reports Server (NTRS)
Swenson, Paul
2017-01-01
Satellite/payload ground systems are typically highly customized to a specific mission's use cases and utilize hundreds (or thousands!) of specialized point-to-point interfaces for data flows and file transfers. Documentation and tracking of these complex interfaces require extensive development time and extremely high staffing costs, and implementation and testing of the interfaces are even more cost-prohibitive; documentation often lags behind implementation, resulting in inconsistencies down the road. With expanding threat vectors, IT security, information assurance, and operational security have become key ground system architecture drivers. New Federal security-related directives are generated on a daily basis, imposing new requirements on current/existing ground systems, and these mandated activities and data calls typically carry little or no additional funding for implementation. As a result, ground system sustaining engineering groups and information technology staff continually struggle to keep up with the rolling tide of security. Advancing security concerns and shrinking budgets are pushing these large stove-piped ground systems to begin sharing resources, i.e., operational/sysadmin staff, IT security baselines, architecture decisions, or even networks and hosting infrastructure. Refactoring these existing ground systems into multi-mission assets proves extremely challenging due to the typically very tight coupling between legacy components; as a result, many "multi-mission" operations environments end up simply sharing compute resources and networks because of the difficulty of refactoring into true multi-mission systems. Utilizing continuous integration and rapid system deployment technologies in conjunction with an open-architecture messaging approach allows system engineers and architects to worry less about the low-level details of interfaces between components and the configuration of systems. GMSEC messaging is inherently designed to support multi-mission requirements and allows components to aggregate data across multiple homogeneous or heterogeneous satellites or payloads; the highly successful Goddard Science and Planetary Operations Control Center (SPOCC) utilizes GMSEC as the hub for its automation and situational awareness capability. This approach shifts the focus towards getting the ground system to a final configuration-managed baseline, as well as towards multi-mission, big-picture capabilities that help increase situational awareness, promote cross-mission sharing, and establish enhanced fleet management capabilities across all levels of the enterprise.
MOOSE IPL Extensions (Control Logic)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Permann, Cody
In FY-2015, the development of MOOSE was driven by the needs of the NEAMS MOOSE-based applications, BISON, MARMOT, and RELAP-7. An emphasis was placed on the continued upkeep and improvement of MOOSE in support of the product line integration goals. New unified documentation tools have been developed, several improvements to regression testing have been enforced, and overall better software quality practices have been implemented. In addition, the Multiapps and Transfers systems have seen significant refactoring and robustness improvements, as has the "Restart and Recover" system in support of Multiapp simulations. Finally, a completely new "Control Logic" system has been engineered to replace the prototype system currently in use in the RELAP-7 code. The development of this system continues and is expected to handle existing needs as well as support future enhancements.
PB-AM: An open-source, fully analytical linear poisson-boltzmann solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felberg, Lisa E.; Brookes, David H.; Yap, Eng-Hui
2016-11-02
We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized Poisson-Boltzmann equation. The PB-AM software package includes the generation of output files appropriate for visualization using VMD, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and offers two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators and students that are more familiar with the APBS framework.
NASA Astrophysics Data System (ADS)
Meng, Hui; Hui, Hui; Hu, Chaoen; Yang, Xin; Tian, Jie
2017-03-01
The ability of fast and single-neuron resolution imaging of neural activities makes light-sheet fluorescence microscopy (LSFM) a powerful imaging technique for functional neural connection applications. State-of-the-art LSFM imaging systems can record the neuronal activities of the entire brain of small animals, such as zebrafish or C. elegans, at single-neuron resolution. However, the stimulated and spontaneous movements of the animal brain result in inconsistent neuron positions during the recording process, and it is time consuming to register the acquired large-scale images with conventional methods. In this work, we address the problem of fast registration of neural positions in stacks of LSFM images, which is necessary to register brain structures and activities. To achieve fast registration of neural activities, we present a rigid registration architecture implemented on a Graphics Processing Unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computational effort. The current image stack was registered to the previous image stack, which served as the reference, and a fast Fourier transform (FFT) algorithm was used to calculate the shift between the stacks. The calculations for image registration were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that the registration can be accelerated to 550 ms for a full high-resolution brain image. Our approach also has potential to be used for other dynamic image registrations in biomedical applications.
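The core of the registration scheme described above, estimating the displacement between two images from the peak of an FFT-based correlation surface, can be sketched in a few lines. Below is a minimal CPU-only illustration in NumPy; the paper's implementation runs on a Quadro K4200 GPU under CUDA, and the function name and 2-D example sizes here are assumptions for illustration only.

```python
import numpy as np

def estimate_shift(reference, moving):
    """Estimate the integer (row, col) shift that maps `reference` onto
    `moving` by locating the peak of their phase-correlation surface."""
    cross_power = np.fft.fft2(moving) * np.conj(np.fft.fft2(reference))
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Shifts beyond half the image size wrap around; unwrap them to signed values.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((256, 256))
    mov = np.roll(ref, shift=(5, -3), axis=(0, 1))     # simulate a known displacement
    print(estimate_shift(ref, mov))                    # expected: (5, -3)
```

The same arithmetic maps directly onto GPU FFT libraries, which is why the shift estimation parallelizes well once the stacks are resident in device memory.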
Retinal Connectomics: Towards Complete, Accurate Networks
Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott
2013-01-01
Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12-10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532
Scintillation-Hardened GPS Receiver
NASA Technical Reports Server (NTRS)
Stephens, Donald R.
2015-01-01
CommLargo, Inc., has developed a scintillation-hardened Global Positioning System (GPS) receiver that improves reliability for low-orbit missions and complies with NASA's Space Telecommunications Radio System (STRS) architecture standards. A software-defined radio (SDR) implementation allows a single hardware element to function as either a conventional radio or as a GPS receiver, providing backup and redundancy for platforms such as the International Space Station (ISS) and high-value remote sensing platforms. The innovation's flexible SDR implementation reduces cost, weight, and power requirements. Scintillation hardening improves mission reliability and variability. In Phase I, CommLargo refactored an open-source GPS software package with Kalman filter-based tracking loops to improve performance during scintillation and also demonstrated improved navigation during a geomagnetic storm. In Phase II, the company generated a new field-programmable gate array (FPGA)-based GPS waveform to demonstrate on NASA's Space Communication and Navigation (SCaN) test bed.
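As an illustration of the Kalman-filter-based tracking loops mentioned above, the sketch below shows a generic two-state (phase, phase-rate) Kalman filter smoothing noisy phase measurements. It is not CommLargo's receiver code; the state model, noise values and function name are assumptions chosen only to show the predict/update structure such a tracking loop relies on.

```python
import numpy as np

def kalman_track(measurements, q=1e-4, r=0.5):
    """Minimal constant-velocity Kalman filter over a scalar observable.
    State x = [phase, phase_rate]; only the phase is measured."""
    x = np.zeros(2)                                # initial state estimate
    P = np.eye(2)                                  # initial state covariance
    F = np.array([[1.0, 1.0], [0.0, 1.0]])         # state transition (unit time step)
    H = np.array([[1.0, 0.0]])                     # measurement matrix: phase only
    Q = q * np.eye(2)                              # process noise covariance
    R = np.array([[r]])                            # measurement noise covariance
    estimates = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

if __name__ == "__main__":
    t = np.arange(200)
    true_phase = 0.05 * t                          # slowly drifting phase
    noisy = true_phase + np.random.default_rng(1).normal(0, 0.7, t.size)
    smoothed = kalman_track(noisy)
    print(float(np.abs(smoothed[-20:] - true_phase[-20:]).mean()))  # small residual
```

During scintillation the measurement noise grows; a filter of this form keeps coasting on its dynamic model instead of losing lock outright, which is the behavior the Phase I work aimed to improve.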
PB-AM: An open-source, fully analytical linear poisson-boltzmann solver.
Felberg, Lisa E; Brookes, David H; Yap, Eng-Hui; Jurrus, Elizabeth; Baker, Nathan A; Head-Gordon, Teresa
2017-06-05
We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized PB equation, for molecules represented as non-overlapping spherical cavities. The PB-AM software package includes the generation of output files appropriate for visualization using visual molecular dynamics, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and offers two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators, and students that are more familiar with the APBS framework. © 2016 Wiley Periodicals, Inc.
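The remark that PB-AM "can be refactored as a many-body expansion to explore 2- and 3-body polarization" refers to the standard truncated many-body decomposition of a total energy. The sketch below shows that decomposition in minimal form; energy_of stands in for a PB-AM solve of the named subset, and the function names and the toy pairwise-additive example are assumptions for illustration only.

```python
from itertools import combinations

def many_body_expansion(n_bodies, energy_of, max_order=3):
    """Truncated many-body expansion of a total energy.
    energy_of(indices) returns the energy of that isolated subset of bodies
    (here a stand-in for one solve of the subset). Returns the per-order
    contributions and their running sum."""
    e1 = {(i,): energy_of((i,)) for i in range(n_bodies)}          # one-body terms
    total = sum(e1.values())
    contributions = {1: total}
    d2 = {}
    if max_order >= 2:
        # Two-body corrections: pair energy minus its monomer energies
        for i, j in combinations(range(n_bodies), 2):
            d2[(i, j)] = energy_of((i, j)) - e1[(i,)] - e1[(j,)]
        contributions[2] = sum(d2.values())
        total += contributions[2]
    if max_order >= 3:
        # Three-body corrections: trimer energy minus all lower-order pieces
        d3 = 0.0
        for i, j, k in combinations(range(n_bodies), 3):
            d3 += (energy_of((i, j, k))
                   - d2[(i, j)] - d2[(i, k)] - d2[(j, k)]
                   - e1[(i,)] - e1[(j,)] - e1[(k,)])
        contributions[3] = d3
        total += d3
    return contributions, total

if __name__ == "__main__":
    def toy_energy(indices):
        # Strictly pairwise-additive toy model: each pair contributes -1.0,
        # so the computed 3-body correction should come out as zero.
        return float(-len(list(combinations(indices, 2))))
    print(many_body_expansion(4, toy_energy))   # ({1: 0.0, 2: -6.0, 3: 0.0}, -6.0)
```

With a fully polarizable solver the 3-body corrections do not vanish, and their size is exactly what such an expansion is used to probe.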
Qi, Jianzhao; Liu, Jin; Wan, Dan; Cai, You-Sheng; Wang, Yinghu; Li, Shunying; Wu, Pan; Feng, Xuan; Qiu, Guofu; Yang, Sheng-Ping; Chen, Wenqing; Deng, Zixin
2015-09-01
Polyoxin and nikkomycin are naturally occurring peptidyl nucleoside antibiotics with potent antifungal bioactivity. Both exhibit similar structural features, having a nucleoside skeleton and one or two peptidyl moieties. Combining the refactoring of the polyoxin producer Streptomyces aureochromogenes with the import of the hydroxypyridylhomothreonine pathway of nikkomycin allows the targeted production of three designer nucleoside antibiotics designated as nikkoxin E, F, and G. These structures were determined by NMR and/or high resolution mass spectrometry. Remarkably, the introduction of an extra copy of the nikS gene encoding an ATP-dependent ligase significantly enhanced the production of the designer antibiotics. Moreover, all three nikkoxins displayed improved bioactivity against several pathogenic fungi as compared with the naturally occurring antibiotics. These data provide a feasible model for high-efficiency generation of nucleoside antibiotics related to polyoxins and nikkomycins in a polyoxin cell factory via a synthetic biology strategy. © 2015 Wiley Periodicals, Inc.
Toward Millions of File System IOPS on Low-Cost, Commodity Hardware
Zheng, Da; Burns, Randal; Szalay, Alexander S.
2013-01-01
We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads. PMID:24402052
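A toy version of the set-associative page cache described above is sketched below: each page hashes to one small set with its own LRU order, so lookups and evictions never touch a global structure and each set can be locked independently, which is the property that avoids lock contention on NUMA machines. This is an illustrative Python model, not the authors' user-space C implementation; class and parameter names are assumptions.

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Minimal set-associative page cache with per-set LRU eviction."""
    def __init__(self, num_sets=64, ways=8):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = 0
        self.misses = 0

    def access(self, page, load_page=lambda p: b""):
        s = self.sets[page % self.num_sets]    # each page maps to exactly one set
        if page in s:
            s.move_to_end(page)                # refresh its LRU position
            self.hits += 1
            return s[page]
        self.misses += 1
        if len(s) >= self.ways:
            s.popitem(last=False)              # evict the least-recently-used entry
        data = load_page(page)                 # miss: fetch from the SSD/backing store
        s[page] = data
        return data

if __name__ == "__main__":
    cache = SetAssociativeCache(num_sets=4, ways=2)
    for p in [0, 4, 0, 8, 0, 4]:               # pages 0, 4, 8 all collide in set 0
        cache.access(p)
    print(cache.hits, cache.misses)            # 2 4
```

Limiting eviction decisions to one small set is also what keeps hit rates close to a global LRU in practice, as the evaluation above reports.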
Toward Millions of File System IOPS on Low-Cost, Commodity Hardware.
Zheng, Da; Burns, Randal; Szalay, Alexander S
2013-01-01
We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads.
Development of a Cloud Resolving Model for Heterogeneous Supercomputers
NASA Astrophysics Data System (ADS)
Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.
2017-12-01
A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and to explore its full potential to scientifically and computationally advance climate simulation and prediction.
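Of the optimization strategies listed, loop fusion is the easiest to show compactly. The sketch below contrasts an unfused two-pass computation, which writes an intermediate array back to memory, with a fused single-pass version that keeps the intermediate value in a register. It is plain Python for illustration only, not the SAM Fortran/OpenACC kernels, and the arithmetic is an arbitrary stand-in.

```python
import numpy as np

def unfused(a, b):
    # Two separate loops: the intermediate array `tmp` is written to and
    # read back from memory, doubling the memory traffic.
    tmp = np.empty_like(a)
    out = np.empty_like(a)
    for i in range(a.size):
        tmp[i] = a[i] * 2.0 + b[i]
    for i in range(a.size):
        out[i] = tmp[i] ** 2 - a[i]
    return out

def fused(a, b):
    # One loop: the intermediate value stays in a register, so each element
    # of a and b is loaded once and out is stored once.
    out = np.empty_like(a)
    for i in range(a.size):
        t = a[i] * 2.0 + b[i]
        out[i] = t ** 2 - a[i]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random(1000), rng.random(1000)
    print(np.allclose(unfused(a, b), fused(a, b)))   # True: same result, fewer passes
```

On a GPU the same transformation additionally merges two kernel launches into one, which is why fusion figures prominently among the strategies mentioned above.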
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
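A minimal sketch of the "automatically compiling process step data ... and calculating process metrics from a summation" idea is given below. The step attributes and metric names are illustrative assumptions, not the patent's actual parameter set.

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    cycle_time: float      # minutes of value-adding work per unit
    queue_time: float      # minutes a unit waits before this step
    defect_rate: float     # fraction of units scrapped or reworked at this step

def evaluate(steps):
    """Compile per-step data into summed process metrics (field and metric
    names are illustrative, not those of the patent)."""
    total_cycle = sum(s.cycle_time for s in steps)
    total_lead = sum(s.cycle_time + s.queue_time for s in steps)
    yield_rate = 1.0
    for s in steps:
        yield_rate *= (1.0 - s.defect_rate)        # rolled-throughput yield
    return {
        "total_cycle_time_min": total_cycle,
        "total_lead_time_min": total_lead,
        "value_added_ratio": total_cycle / total_lead,
        "rolled_throughput_yield": yield_rate,
    }

if __name__ == "__main__":
    line = [ProcessStep("cut", 2.0, 30.0, 0.01),
            ProcessStep("weld", 5.0, 45.0, 0.02),
            ProcessStep("paint", 3.0, 120.0, 0.005)]
    print(evaluate(line))
```

Comparing such summed metrics before and after a proposed change is one simple way to support the batch-to-lean transition decision the method describes.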
The Virtual Environment for Reactor Applications (VERA): Design and architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, John A., E-mail: turnerja@ornl.gov; Clarno, Kevin; Sieger, Matt
VERA, the Virtual Environment for Reactor Applications, is the system of physics capabilities being developed and deployed by the Consortium for Advanced Simulation of Light Water Reactors (CASL). CASL was established for the modeling and simulation of commercial nuclear reactors. VERA consists of integrating and interfacing software together with a suite of physics components adapted and/or refactored to simulate relevant physical phenomena in a coupled manner. VERA also includes the software development environment and computational infrastructure needed for these components to be effectively used. We describe the architecture of VERA from both software and numerical perspectives, along with the goals and constraints that drove major design decisions, and their implications. We explain why VERA is an environment rather than a framework or toolkit, why these distinctions are relevant (particularly for coupled physics applications), and provide an overview of results that demonstrate the use of VERA tools for a variety of challenging applications within the nuclear industry.
SimVascular: An Open Source Pipeline for Cardiovascular Simulation.
Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C
2017-03-01
Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management and our recent developments have replaced these commercial components to create a fully open source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.
GPU accelerated particle visualization with Splotch
NASA Astrophysics Data System (ADS)
Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.
2014-07-01
Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are production of high quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch for exploiting emerging HPC architectures, nowadays increasingly populated with GPUs. A performance model is introduced to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performance. Our implementation was accomplished by using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work developments, including possibilities for further optimizations and exploitation of hybrid systems and emerging accelerators.
Process, including PSA and membrane separation, for separating hydrogen from hydrocarbons
Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo
2001-01-01
An improved process for separating hydrogen from hydrocarbons. The process includes a pressure swing adsorption step, a compression/cooling step and a membrane separation step. The membrane step relies on achieving a methane/hydrogen selectivity of at least about 2.5 under the conditions of the process.
Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock
2017-01-01
Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect for either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing.
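The repeated-measures test for a main effect of pipeline step can be reproduced in outline with statsmodels, as sketched below on synthetic data. The subject count, step names and effect sizes are invented for illustration and do not correspond to the OASIS data or the study's actual pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format data: one volume estimate per subject per pipeline step.
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(20), 4)
steps = np.tile(["raw", "skullstrip", "normalize", "segment"], 20)
step_effect = {"raw": 0, "skullstrip": -20, "normalize": -35, "segment": -50}
volume = (4000
          + np.array([step_effect[s] for s in steps])
          + rng.normal(0, 30, subjects.size))
df = pd.DataFrame({"subject": subjects, "step": steps, "volume": volume})

# Within-subject (repeated-measures) ANOVA for a main effect of pipeline step.
res = AnovaRM(df, depvar="volume", subject="subject", within=["step"]).fit()
print(res)
```

A significant F statistic for the "step" factor corresponds to the main effect of pipeline step reported above; the scan-rescan and repeated-run factors would enter the same way as additional within-subject columns.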
2017-01-01
Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect for either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing. PMID:29023597
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
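A simplified reading of the first claim, deriving a conditional probability distribution over process steps from the defect characterization data of retrieved similar images, is sketched below. The record fields and step names are assumptions for illustration only.

```python
from collections import Counter

def errant_step_probabilities(retrieved_defects):
    """Given defect records retrieved for their similarity to a query image,
    estimate P(process step | observed defect) from the steps recorded in
    those records and rank the steps by probability."""
    counts = Counter(rec["process_step"] for rec in retrieved_defects)
    total = sum(counts.values())
    posterior = {step: n / total for step, n in counts.items()}
    return sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    retrieved = [
        {"image_id": 101, "process_step": "etch"},
        {"image_id": 102, "process_step": "etch"},
        {"image_id": 103, "process_step": "lithography"},
        {"image_id": 104, "process_step": "etch"},
        {"image_id": 105, "process_step": "deposition"},
    ]
    ranking = errant_step_probabilities(retrieved)
    print(ranking[0])   # ('etch', 0.6): most probable source of the defect
```

The step with the highest conditional probability is then flagged as the likely errant process step, mirroring the "highest probable source" language of the method.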
25 CFR 15.11 - What are the basic steps of the probate process?
Code of Federal Regulations, 2010 CFR
2010-04-01
25 Indians, 2010-04-01: What are the basic steps of the probate process? Section 15.11, BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR, PROBATE, PROBATE OF INDIAN... The basic steps of the probate process are: (a) We learn...
Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian
2017-10-10
The protein A capture step is the main cost driver in downstream processing, with high attrition costs, especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step while leaving the polishing steps unchanged, to minimize the required process adaptations compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in comparable product quality and step yield compared to the batch process. Productivity for the protein A step could be increased by up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, cost of goods were compared for the protein A step between the hybrid process and the batch process, showing a 300% cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.
48 CFR 15.202 - Advisory multi-step process.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 Federal Acquisition Regulations System, 2010-10-01: Advisory multi-step... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... participate in the acquisition. This process should not be used for multi-step acquisitions where it would...
NASA Astrophysics Data System (ADS)
Hegde, Ananda; Sharma, Sathyashankara
2018-05-01
Austempered Ductile Iron (ADI) is a revolutionary material with high strength and hardness combined with optimum ductility and toughness. The discovery of the two-step austempering process has led to a superior combination of all the mechanical properties. However, because of the high strength and hardness of ADI, there is a concern regarding its machinability. In the present study, the machinability of ADI produced using conventional and two-step heat treatment processes is assessed using tool life and surface roughness. Speed, feed and depth of cut are considered as the machining parameters in the dry turning operation. The machinability results along with the mechanical properties are compared for ADI produced using both conventional and two-step austempering processes. The results have shown that the two-step austempering process produced better toughness with good hardness and strength without sacrificing ductility. Addition of 0.64 wt% manganese did not cause any detrimental effect on the machinability of ADI in either the conventional or the two-step process. Marginal improvements in tool life and surface roughness were observed in the two-step process compared to the conventional process.
High-resolution RCMs as pioneers for future GCMs
NASA Astrophysics Data System (ADS)
Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.
2017-12-01
Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data sets, the assessment of regional-scale climate feedback processes, and the development of alternative output analysis methodologies.
Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A
2010-12-15
The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, the multi-dimensional combination of operational variables was studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular-weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality. © 2010 Wiley Periodicals, Inc.
Flexible Early Warning Systems with Workflows and Decision Tables
NASA Astrophysics Data System (ADS)
Riedel, F.; Chaves, F.; Zeiner, H.
2012-04-01
An essential part of early warning systems and systems for crisis management are decision support systems that facilitate communication and collaboration. Often official policies specify how different organizations collaborate and what information is communicated to whom. For early warning systems it is crucial that information is exchanged dynamically in a timely manner and all participants get exactly the information they need to fulfil their role in the crisis management process. Information technology obviously lends itself to automate parts of the process. We have experienced however that in current operational systems the information logistics processes are hard-coded, even though they are subject to change. In addition, systems are tailored to the policies and requirements of a certain organization and changes can require major software refactoring. We seek to develop a system that can be deployed and adapted to multiple organizations with different dynamic runtime policies. A major requirement for such a system is that changes can be applied locally without affecting larger parts of the system. In addition to the flexibility regarding changes in policies and processes, the system needs to be able to evolve; when new information sources become available, it should be possible to integrate and use these in the decision process. In general, this kind of flexibility comes with a significant increase in complexity. This implies that only IT professionals can maintain a system that can be reconfigured and adapted; end-users are unable to utilise the provided flexibility. In the business world similar problems arise and previous work suggested using business process management systems (BPMS) or workflow management systems (WfMS) to guide and automate early warning processes or crisis management plans. However, the usability and flexibility of current WfMS are limited, because current notations and user interfaces are still not suitable for end-users, and workflows are usually only suited for rigid processes. We show how improvements can be achieved by using decision tables and rule-based adaptive workflows. Decision tables have been shown to be an intuitive tool that can be used by domain experts to express rule sets that can be interpreted automatically at runtime. Adaptive workflows use a rule-based approach to increase the flexibility of workflows by providing mechanisms to adapt workflows based on context changes, human intervention and availability of services. The combination of workflows, decision tables and rule-based adaption creates a framework that opens up new possibilities for flexible and adaptable workflows, especially, for use in early warning and crisis management systems.
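A decision table of the kind advocated above can be interpreted at runtime with very little machinery, as the sketch below shows: each row pairs a set of conditions with the actions to trigger when they all hold, and the table itself can be edited by domain experts without touching code. The rule contents, field names and flood scenario are assumptions chosen only to illustrate the mechanism.

```python
def evaluate_decision_table(table, context):
    """Return the actions of the first row whose conditions all hold for the
    given context (a minimal interpretation of a decision table; a real
    engine might fire all matching rows instead of only the first)."""
    for row in table:
        if all(cond(context) for cond in row["conditions"]):
            return row["actions"]
    return []

if __name__ == "__main__":
    flood_table = [
        {"conditions": [lambda c: c["water_level_cm"] > 250,
                        lambda c: c["rain_forecast_mm"] > 50],
         "actions": ["alert_civil_protection", "notify_mayor", "open_retention_basin"]},
        {"conditions": [lambda c: c["water_level_cm"] > 200],
         "actions": ["notify_on_call_engineer"]},
    ]
    context = {"water_level_cm": 260, "rain_forecast_mm": 80}
    print(evaluate_decision_table(flood_table, context))
    # ['alert_civil_protection', 'notify_mayor', 'open_retention_basin']
```

Because the table is plain data, swapping in an organization's own thresholds or adding a new information source changes only the table, not the workflow engine, which is the locality of change the text argues for.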
HMMER web server: 2018 update.
Potter, Simon C; Luciani, Aurélien; Eddy, Sean R; Park, Youngmi; Lopez, Rodrigo; Finn, Robert D
2018-06-14
The HMMER webserver [http://www.ebi.ac.uk/Tools/hmmer] is a free-to-use service which provides fast searches against widely used sequence databases and profile hidden Markov model (HMM) libraries using the HMMER software suite (http://hmmer.org). The results of a sequence search may be summarized in a number of ways, allowing users to view and filter the significant hits by domain architecture or taxonomy. For large scale usage, we provide an application programmatic interface (API) which has been expanded in scope, such that all result presentations are available via both HTML and API. Furthermore, we have refactored our JavaScript visualization library to provide standalone components for different result representations. These consume the aforementioned API and can be integrated into third-party websites. The range of databases that can be searched against has been expanded, adding four sequence datasets (12 in total) and one profile HMM library (6 in total). To help users explore the biological context of their results, and to discover new data resources, search results are now supplemented with cross references to other EMBL-EBI databases.
The Virtual Environment for Reactor Applications (VERA): Design and architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, John A.; Clarno, Kevin; Sieger, Matt
VERA, the Virtual Environment for Reactor Applications, is the system of physics capabilities being developed and deployed by the Consortium for Advanced Simulation of Light Water Reactors (CASL), the first DOE Hub, which was established in July 2010 for the modeling and simulation of commercial nuclear reactors. VERA consists of integrating and interfacing software together with a suite of physics components adapted and/or refactored to simulate relevant physical phenomena in a coupled manner. VERA also includes the software development environment and computational infrastructure needed for these components to be effectively used. We describe the architecture of VERA from both a software and a numerical perspective, along with the goals and constraints that drove the major design decisions and their implications. As a result, we explain why VERA is an environment rather than a framework or toolkit, why these distinctions are relevant (particularly for coupled physics applications), and provide an overview of results that demonstrate the application of VERA tools for a variety of challenging problems within the nuclear industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Ai-Qun; Pratomo Juwono, Nina Kurniasih; Synthetic Biology Research Program, National University of Singapore, Singapore
Fatty acid derivatives, such as hydroxy fatty acids, fatty alcohols, fatty acid methyl/ethyl esters, and fatty alka(e)nes, have a wide range of industrial applications including plastics, lubricants, and fuels. Currently, these chemicals are obtained mainly through chemical synthesis, which is complex and costly, and their availability from natural biological sources is extremely limited. Metabolic engineering of microorganisms has provided a platform for effective production of these valuable biochemicals. Notably, synthetic biology-based metabolic engineering strategies have been extensively applied to refactor microorganisms for improved biochemical production. Here, we reviewed: (i) the current status of metabolic engineering of microbes that produce fatty acid-derived valuable chemicals, and (ii) the recent progress of synthetic biology approaches that assist metabolic engineering, such as mRNA secondary structure engineering, sensor-regulator system, regulatable expression system, ultrasensitive input/output control system, and computer science-based design of complex gene circuits. Furthermore, key challenges and strategies were discussed. Finally, we concluded that synthetic biology provides useful metabolic engineering strategies for economically viable production of fatty acid-derived valuable chemicals in engineered microbes.
The Virtual Environment for Reactor Applications (VERA): Design and architecture
Turner, John A.; Clarno, Kevin; Sieger, Matt; ...
2016-09-08
VERA, the Virtual Environment for Reactor Applications, is the system of physics capabilities being developed and deployed by the Consortium for Advanced Simulation of Light Water Reactors (CASL), the first DOE Hub, which was established in July 2010 for the modeling and simulation of commercial nuclear reactors. VERA consists of integrating and interfacing software together with a suite of physics components adapted and/or refactored to simulate relevant physical phenomena in a coupled manner. VERA also includes the software development environment and computational infrastructure needed for these components to be effectively used. We describe the architecture of VERA from both a software and a numerical perspective, along with the goals and constraints that drove the major design decisions and their implications. As a result, we explain why VERA is an environment rather than a framework or toolkit, why these distinctions are relevant (particularly for coupled physics applications), and provide an overview of results that demonstrate the application of VERA tools for a variety of challenging problems within the nuclear industry.
Production of Fatty Acid-Derived Valuable Chemicals in Synthetic Microbes
Yu, Ai-Qun; Pratomo Juwono, Nina Kurniasih; Leong, Susanna Su Jan; Chang, Matthew Wook
2014-01-01
Fatty acid derivatives, such as hydroxy fatty acids, fatty alcohols, fatty acid methyl/ethyl esters, and fatty alka(e)nes, have a wide range of industrial applications including plastics, lubricants, and fuels. Currently, these chemicals are obtained mainly through chemical synthesis, which is complex and costly, and their availability from natural biological sources is extremely limited. Metabolic engineering of microorganisms has provided a platform for effective production of these valuable biochemicals. Notably, synthetic biology-based metabolic engineering strategies have been extensively applied to refactor microorganisms for improved biochemical production. Here, we reviewed: (i) the current status of metabolic engineering of microbes that produce fatty acid-derived valuable chemicals, and (ii) the recent progress of synthetic biology approaches that assist metabolic engineering, such as mRNA secondary structure engineering, sensor-regulator system, regulatable expression system, ultrasensitive input/output control system, and computer science-based design of complex gene circuits. Furthermore, key challenges and strategies were discussed. Finally, we concluded that synthetic biology provides useful metabolic engineering strategies for economically viable production of fatty acid-derived valuable chemicals in engineered microbes. PMID:25566540
Traversing the fungal terpenome
Quin, Maureen B.; Flynn, Christopher M.; Schmidt-Dannert, Claudia
2014-01-01
Fungi (Ascomycota and Basidiomycota) are prolific producers of structurally diverse terpenoid compounds. Classes of terpenoids identified in fungi include the sesqui-, di- and triterpenoids. Biosynthetic pathways and enzymes to terpenoids from each of these classes have been described. These typically involve the scaffold-generating terpene synthases and cyclases, and scaffold-tailoring enzymes such as cytochrome P450 monooxygenases, NAD(P)+ and flavin dependent oxidoreductases, and various group transferases that generate the final bioactive structures. The biosynthesis of several sesquiterpenoid mycotoxins and bioactive diterpenoids has been well studied in Ascomycota (e.g. filamentous fungi). Little is known about the terpenoid biosynthetic pathways in Basidiomycota (e.g. mushroom forming fungi), although they produce a huge diversity of terpenoid natural products. Specifically, many trans-humulyl cation derived sesquiterpenoid natural products with potent bioactivities have been isolated. Biosynthetic gene clusters responsible for the production of trans-humulyl cation derived protoilludanes, and other sesquiterpenoids, can be rapidly identified by genome sequencing and bioinformatic methods. Genome mining combined with heterologous biosynthetic pathway refactoring has the potential to facilitate discovery and production of pharmaceutically relevant fungal terpenoids. PMID:25171145
BiDiBlast: comparative genomics pipeline for the PC.
de Almeida, João M G C F
2010-06-01
Bi-directional BLAST is a simple approach to detect, annotate, and analyze candidate orthologous or paralogous sequences in a single go. This procedure is usually confined to the realm of customized Perl scripts, usually tuned for UNIX-like environments. Porting those scripts to other operating systems involves refactoring them, and also the installation of the Perl programming environment with the required libraries. To overcome these limitations, a data pipeline was implemented in Java. This application submits two batches of sequences to local versions of the NCBI BLAST tool, manages result lists, and refines both bi-directional and simple hits. GO Slim terms are attached to hits, several statistics are derived, and molecular evolution rates are estimated through PAML. The results are written to a set of delimited text tables intended for further analysis. The provided graphical user interface allows friendly interaction with this application, which is documented and available to download at http://moodle.fct.unl.pt/course/view.php?id=2079 or https://sourceforge.net/projects/bidiblast/ under the GNU GPL license. Copyright 2010 Beijing Genomics Institute. Published by Elsevier Ltd. All rights reserved.
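The bi-directional step itself reduces to computing reciprocal best hits from the two BLAST result sets, as in the sketch below. The input format (query id mapped to (subject id, bit score) pairs) and the tie handling are assumptions; BiDiBlast's exact criteria are not reproduced here.

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Derive candidate orthologs as reciprocal best BLAST hits.

    hits_ab / hits_ba map each query id to a list of (subject id, bit score)
    tuples from the two BLAST runs (genome A vs B and B vs A). A pair (a, b)
    is kept only if b is a's best hit and a is b's best hit.
    """
    best_ab = {q: max(h, key=lambda x: x[1])[0] for q, h in hits_ab.items() if h}
    best_ba = {q: max(h, key=lambda x: x[1])[0] for q, h in hits_ba.items() if h}
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

if __name__ == "__main__":
    hits_ab = {"geneA1": [("geneB7", 310.0), ("geneB2", 55.0)],
               "geneA2": [("geneB2", 120.0)]}
    hits_ba = {"geneB7": [("geneA1", 305.0)],
               "geneB2": [("geneA9", 80.0), ("geneA2", 60.0)]}
    print(reciprocal_best_hits(hits_ab, hits_ba))   # [('geneA1', 'geneB7')]
```

Queries whose best hit is not reciprocated (geneA2 above) remain "simple hits" rather than ortholog candidates, which matches the distinction the pipeline draws between bi-directional and simple hits.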
The High Field Path to Practical Fusion Energy
NASA Astrophysics Data System (ADS)
Mumgaard, Robert; Whyte, D.; Greenwald, M.; Hartwig, Z.; Brunner, D.; Sorbom, B.; Marmar, E.; Minervini, J.; Bonoli, P.; Irby, J.; Labombard, B.; Terry, J.; Vieira, R.; Wukitch, S.
2017-10-01
We propose a faster, lower cost development path for fusion energy enabled by high temperature superconductors, devices at high magnetic field, innovative technologies and modern approaches to technology development. Timeliness, scale, and economic-viability are the drivers for fusion energy to combat climate change and aid economic development. The opportunities provided by high-temperature superconductors, innovative engineering and physics, and new organizational structures identified over the last few years open new possibilities for realizing practical fusion energy that could meet mid-century de-carbonization needs. We discuss re-factoring the fusion energy development path with an emphasis on concrete risk retirement strategies utilizing a modular approach based on the high-field tokamak that leverages the broader tokamak physics understanding of confinement, stability, and operational limits. Elements of this plan include development of high-temperature superconductor magnets, simplified immersion blankets, advanced long-leg divertors, a compact divertor test tokamak, efficient current drive, modular construction, and demountable magnet joints. An R&D plan culminating in the construction of an integrated pilot plant and test facility modeled on the ARC concept is presented.
PanDA for ATLAS distributed computing in the next decade
NASA Astrophysics Data System (ADS)
Barreiro Megino, F. H.; De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade, including the LHC Run 1 data-taking period. However, it was decided to upgrade the whole system concurrently with the LHC's first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarded in favor of a more automated and scalable model. Workloads are dynamically tailored for optimal usage of resources, with the brokerage taking network traffic and forecasts into account. Computing resources are partitioned based on dynamic knowledge of their status and characteristics. The pilot has been re-factored around a plugin structure for easier development and deployment. Bookkeeping is handled with both coarse and fine granularities for efficient utilization of pledged or opportunistic resources. An in-house security mechanism authenticates the pilot and data management services in off-grid environments such as volunteer computing and private local clusters. The PanDA monitor has been extensively optimized for performance and extended with analytics to provide aggregated summaries of the system as well as drill-down to operational details. Many other features have been planned or recently implemented, including adoption by non-LHC experiments such as bioinformatics groups successfully running the Paleomix (microbial genome and metagenomes) payload on supercomputers. In this paper we focus on the new and planned features that are most important to the next decade of distributed computing workload management.
Processing of zero-derived words in English: an fMRI investigation.
Pliatsikas, Christos; Wheeldon, Linda; Lahiri, Aditi; Hansen, Peter C
2014-01-01
Derivational morphological processes allow us to create new words (e.g. punish (V) to noun (N) punishment) from base forms. The number of steps from the basic units to derived words often varies (e.g., nationality
NASA Astrophysics Data System (ADS)
Wang, Minhuan; Feng, Yulin; Bian, Jiming; Liu, Hongzhu; Shi, Yantao
2018-01-01
The mesoscopic perovskite solar cells (M-PSCs) were synthesized with MAPbI3 perovskite layers as light harvesters, grown with one-step and two-step solution processes, respectively. A comparative study was performed through the quantitative correlation of the resulting device performance and the crystalline quality of the perovskite layers. Compared with the one-step counterpart, a pronounced improvement in the steady-state power conversion efficiency (PCE) of 56.86% was achieved with the two-step process, which mainly resulted from the significant enhancement in fill factor (FF) from 48% to 77% without sacrificing the open circuit voltage (Voc) and short circuit current (Jsc). The enhanced FF was attributed to the reduced non-radiative recombination channels due to the better crystalline quality and larger grain size of the two-step processed perovskite layer. Moreover, the superiority of the two-step over the one-step process was demonstrated with rather good reproducibility.
Schulze, M; Kuster, C; Schäfer, J; Jung, M; Grossfeld, R
2018-03-01
The processing of ejaculates is a fundamental step for the fertilizing capacity of boar spermatozoa. The aim of the present study was to identify factors that affect the quality of boar semen doses. The production process during 1 day of semen processing in 26 European boar studs was monitored. In each boar stud, nine to 19 randomly selected ejaculates from 372 Pietrain boars were analyzed for sperm motility, acrosome and plasma membrane integrity, mitochondrial activity and thermo-resistance (TRT). Each ejaculate was monitored for production time and temperature for each step in semen processing using the specially programmed software SEQU (version 1.7, Minitüb, Tiefenbach, Germany). The dilution of ejaculates with a short-term extender was completed in one step in 10 AI centers (n = 135 ejaculates), in two steps in 11 AI centers (n = 158 ejaculates) and in three steps in five AI centers (n = 79 ejaculates). Results indicated greater semen quality with one-step isothermal dilution compared with multi-step dilution of AI semen doses (total motility TRT d7: 71.1 ± 19.2%, 64.6 ± 20.0%, 47.1 ± 27.1% for one-step, two-step and three-step dilution, respectively; P < .05). There was a marked advantage when using the one-step isothermal dilution regarding time management, preservation suitability, stability and stress resistance. One-step dilution resulted in significantly lower holding times of raw ejaculates and reduced the possible risk of mistakes due to a lower number of processing steps. These results lead to refined recommendations for boar semen processing. Copyright © 2018 Elsevier B.V. All rights reserved.
A Coordinated Initialization Process for the Distributed Space Exploration Simulation
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David
2007-01-01
A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions
Two-Step Plasma Process for Cleaning Indium Bonding Bumps
NASA Technical Reports Server (NTRS)
Greer, Harold F.; Vasquez, Richard P.; Jones, Todd J.; Hoenk, Michael E.; Dickie, Matthew R.; Nikzad, Shouleh
2009-01-01
A two-step plasma process has been developed as a means of removing surface oxide layers from indium bumps used in flip-chip hybridization (bump bonding) of integrated circuits. The two-step plasma process makes it possible to remove surface indium oxide, without incurring the adverse effects of the acid etching process.
DMI's Baltic Sea Coastal operational forecasting system
NASA Astrophysics Data System (ADS)
Murawski, Jens; Berg, Per; Weismann Poulsen, Jacob
2017-04-01
Operational forecasting is challenged with bridging the gap between the large scales of the driving weather systems and the local, human scales of the model applications. The limit of what can be represented by local models has been continuously shifted to higher and higher spatial resolution, with the aim to better resolve the local dynamics and to make it possible to describe processes that could only be parameterised in older versions, with the ultimate goal to improve the quality of the forecast. Current hardware trends demand a stronger focus on the development of efficient, highly parallelised software and require a refactoring of the code with a solid focus on portable performance. The gained performance can be used for running high resolution models with a larger coverage. Together with the development of efficient two-way nesting routines, this has made it possible to approach the near-coastal zone with model applications that can run in a time-effective way. Denmark's Meteorological Institute uses the HBM(1) ocean circulation model for applications that cover the entire Baltic Sea and North Sea with an integrated model set-up that spans the range of horizontal resolution from 1 nm for the entire Baltic Sea to approx. 200 m resolution in local fjords (Limfjord). For the next model generation, the high resolution set-ups are going to be extended, and new high resolution domains in coastal zones are either implemented or tested for operational use. For the first time it will be possible to cover large stretches of the Baltic coastal zone with sufficiently high resolution to model the local hydrodynamics adequately. (1) HBM stands for HIROMB-BOOS-Model, where HIROMB stands for "High Resolution Model for the Baltic Sea" and BOOS stands for "Baltic Operational Oceanography System".
Limits of acceptable change and natural resources planning: when is LAC useful, when is it not?
David N. Cole; Stephen F. McCool
1997-01-01
There are ways to improve the LAC process and its implementational procedures. One significant procedural modification is the addition of a new step. This step, which becomes the first step in the process, involves more explicitly defining goals and desired conditions. For other steps in the process, clarifications of concept and terminology are advanced, as are...
Role of step stiffness and kinks in the relaxation of vicinal (001) with zigzag [110] steps
NASA Astrophysics Data System (ADS)
Mahjoub, B.; Hamouda, Ajmi BH.; Einstein, TL.
2017-08-01
We present a kinetic Monte Carlo study of the relaxation dynamics and steady state configurations of 〈110〉 steps on a vicinal (001) simple cubic surface. This system is interesting because 〈110〉 (fully kinked) steps have different elementary excitation energetics and favor step diffusion more than 〈100〉 (nominally straight) steps. In this study we show how this leads to different relaxation dynamics as well as to different steady state configurations, including that 2-bond breaking processes are rate determining for 〈110〉 steps, in contrast to the 3-bond breaking processes found for 〈100〉 steps in previous work [Surface Sci. 602, 3569 (2008)]. The analysis of the terrace-width distribution (TWD) shows a significant role of kink-generation-annihilation processes during the relaxation of steps: the kinetics of relaxation toward the steady state is much faster in the case of 〈110〉-zigzag steps, with a higher standard deviation of the TWD, in agreement with a decrease of step stiffness due to orientation. We conclude that smaller step stiffness leads inexorably to faster step dynamics towards the steady state. The step-edge anisotropy slows the relaxation of steps and increases the strength of step-step effective interactions.
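The event selection at the heart of such a kinetic Monte Carlo simulation can be sketched in a few lines: pick a move with probability proportional to its Arrhenius rate and advance time by an exponential increment. The barriers, temperature and move classes below are illustrative assumptions, not the paper's parameter set or move catalogue.

```python
import numpy as np

def kmc_select(rates, rng):
    """One rejection-free kinetic Monte Carlo step (generic BKL/Gillespie
    scheme): choose an event with probability proportional to its rate and
    draw the waiting time from an exponential distribution."""
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    dt = -np.log(rng.random()) / total
    return event, dt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kT = 0.25                                   # temperature in units of the bond energy
    barriers = np.array([1.0, 2.0, 3.0])        # 1-, 2- and 3-bond breaking moves
    rates = np.exp(-barriers / kT)              # Arrhenius-type rates
    counts = np.zeros(3, dtype=int)
    t = 0.0
    for _ in range(10000):
        event, dt = kmc_select(rates, rng)
        counts[event] += 1
        t += dt
    print(counts, t)   # single-bond moves dominate; multi-bond breaks are rare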
Ten steps to successful software process improvement
NASA Technical Reports Server (NTRS)
Kandt, R. K.
2003-01-01
This paper identifies ten steps for managing change that address organizational and cultural issues. Four of these steps are critical, that if not done, will almost guarantee failure. This ten-step program emphasizes the alignment of business goals, change process goals, and the work performed by the employees of an organization.
Bistatic SAR: Signal Processing and Image Formation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, Daniel E.; Yocky, David A.
This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing steps, data extraction to form a phase history, and finally, image formation. Various plots and values will be shown at most steps to illustrate the processing for a bistatic COSMO SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.
Effective virus inactivation and removal by steps of Biotest Pharmaceuticals IGIV production process
Dichtelmüller, Herbert O.; Flechsig, Eckhard; Sananes, Frank; Kretschmar, Michael; Dougherty, Christopher J.
2012-01-01
The virus validation of three steps of Biotest Pharmaceuticals IGIV production process is described here. The steps validated are precipitation and removal of fraction III of the cold ethanol fractionation process, solvent/detergent treatment and 35 nm virus filtration. Virus validation was performed considering combined worst case conditions. By these validated steps sufficient virus inactivation/removal is achieved, resulting in a virus safe product. PMID:24371563
Mechanical and Metallurgical Evolution of Stainless Steel 321 in a Multi-step Forming Process
NASA Astrophysics Data System (ADS)
Anderson, M.; Bridier, F.; Gholipour, J.; Jahazi, M.; Wanjara, P.; Bocher, P.; Savoie, J.
2016-04-01
This paper examines the metallurgical evolution of AISI Stainless Steel 321 (SS 321) during multi-step forming, a process that involves cycles of deformation with intermediate heat treatment steps. The multi-step forming process was simulated by implementing interrupted uniaxial tensile testing experiments. Evolution of the mechanical properties as well as the microstructural features, such as twins and textures of the austenite and martensite phases, was studied as a function of the multi-step forming process. The characteristics of the Strain-Induced Martensite (SIM) were also documented for each deformation step and intermediate stress relief heat treatment. The results indicated that the intermediate heat treatments considerably increased the formability of SS 321. Texture analysis showed that the effect of the intermediate heat treatment on the austenite was minor and led to partial recrystallization, while deformation was observed to reinforce the crystallographic texture of austenite. For the SIM, an Olson-Cohen-type equation was identified to analytically predict its formation during the multi-step forming process. The generated SIM was textured and weakened with increasing deformation.
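The abstract identifies an Olson-Cohen-type relation for the SIM fraction; the standard form of that equation (with fitting parameters α and β and exponent n, whose values for SS 321 are not given here) is shown below as background.

```latex
% Standard Olson-Cohen form for the strain-induced martensite volume fraction
% f_{\alpha'} as a function of true strain \varepsilon; \alpha, \beta and n
% are fitting parameters (their values for SS 321 are not reported in this abstract).
f_{\alpha'}(\varepsilon) = 1 - \exp\left\{-\beta\left[1 - \exp(-\alpha\,\varepsilon)\right]^{n}\right\}
```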
Separation process using pervaporation and dephlegmation
Vane, Leland M.; Mairal, Anurag P.; Ng, Alvin; Alvarez, Franklin R.; Baker, Richard W.
2004-06-29
A process for treating liquids containing organic compounds and water. The process includes a pervaporation step in conjunction with a dephlegmation step to treat at least a portion of the permeate vapor from the pervaporation step. The process yields a membrane residue stream, a stream enriched in the more volatile component (usually the organic) as the overhead stream from the dephlegmator and a condensate stream enriched in the less volatile component (usually the water) as a bottoms stream from the dephlegmator. Any of these may be the principal product of the process. The membrane separation step may also be performed in the vapor phase, or by membrane distillation.
NASA Astrophysics Data System (ADS)
Yang, Yao-Joe; Kuo, Wen-Cheng; Fan, Kuang-Chao
2006-01-01
In this work, we present a single-run single-mask (SRM) process for fabricating suspended high-aspect-ratio structures on standard silicon wafers using an inductively coupled plasma-reactive ion etching (ICP-RIE) etcher. This process eliminates extra fabrication steps which are required for structure release after trench etching. Released microstructures with 120 μm thickness are obtained by this process. The corresponding maximum aspect ratio of the trench is 28. The SRM process is an extended version of the standard process proposed by BOSCH GmbH (BOSCH process). The first step of the SRM process is a standard BOSCH process for trench etching, then a polymer layer is deposited on trench sidewalls as a protective layer for the subsequent structure-releasing step. The structure is released by dry isotropic etching after the polymer layer on the trench floor is removed. All the steps can be integrated into a single-run ICP process. Also, only one mask is required. Therefore, the process complexity and fabrication cost can be effectively reduced. Discussions on each SRM step and considerations for avoiding undesired etching of the silicon structures during the release process are also presented.
Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A
2013-08-01
Weight, oil, fatty acids, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed: oil percent was higher and lower in Ascolana (crude step) and in Fishomi (pasteurization step), respectively; during processing steps, in all cultivars, oleic, palmitic, linoleic, and stearic acids were higher; the highest changes in saturated and unsaturated fatty acids were in fermentation step; the highest and the lowest ratios of ω3 / ω6 were in Ascolana (washing step) and in Zard (pasteurization step), respectively; the highest and the lowest tocopherol were in Amigdalolia and Fishomi, respectively, and major damage occurred in lye step; the highest and the lowest polyphenols were in Ascolana (crude step) and in Zard and Ascolana (pasteurization step), respectively; the major damage among cultivars occurred during lye step, in which the polyphenol reduced to 1/10 of first content; sterol did not undergo changes during steps. Reviewing of olive patents shows that many compositions of fruits such as oil quality, fatty acids, quantity and its fraction can be changed by alteration in cultivar and process.
Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2006-01-17
The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.
Data-based control of a multi-step forming process
NASA Astrophysics Data System (ADS)
Schulte, R.; Frey, P.; Hildenbrand, P.; Vogel, M.; Betz, C.; Lechner, M.; Merklein, M.
2017-09-01
The fourth industrial revolution represents a new stage in the organization and management of the entire value chain. However, in the field of forming technology, the fourth industrial revolution has so far arrived only gradually. In order to make a valuable contribution to the digital factory, the control of a multistage forming process was investigated. Within the framework of the investigation, an abstracted and transferable model is used to outline which data have to be collected, how a practical interface between the different forming machines can be designed, and which control tasks must be fulfilled. The goal of this investigation was to control the subsequent process step based on the data recorded in the first step. The investigated process chain links various metal forming processes, which are typical elements of a multi-step forming process. Data recorded in the first step of the process chain are analyzed and processed for improved process control of the subsequent process. On the basis of the scientific knowledge gained, it is possible to make forming operations more robust and at the same time more flexible, and thus create the foundation for linking various production processes in an efficient way.
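As a rough illustration of controlling the subsequent step from data recorded in the first step, the sketch below implements a simple feed-forward compensation; the measured quantity, the adjusted parameter, and the linear gain are all hypothetical and are not taken from the paper.

```python
# Minimal feed-forward sketch between two forming steps. All names and the
# linear compensation law are illustrative assumptions, not the controller
# described in the paper.

NOMINAL_THICKNESS_MM = 1.00   # expected sheet thickness after step 1 (placeholder)
NOMINAL_FORCE_KN = 250.0      # nominal step-2 blank-holder force (placeholder)
GAIN_KN_PER_MM = 400.0        # compensation gain (placeholder)

def step2_blank_holder_force(measured_thickness_mm: float) -> float:
    """Adjust the step-2 parameter from the measurement recorded in step 1."""
    deviation = measured_thickness_mm - NOMINAL_THICKNESS_MM
    return NOMINAL_FORCE_KN - GAIN_KN_PER_MM * deviation

# Example: a thinner-than-nominal part from step 1 leads to a reduced force in step 2.
print(step2_blank_holder_force(0.97))
```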
Lively, Brooks; Kumar, Sandeep; Tian, Liu; Li, Bin; Zhong, Wei-Hong
2011-05-01
In this study we report the advantages of a 2-step method that incorporates an additional process pre-conditioning step for rapid and precise blending of the constituents prior to the commonly used melt compounding method for preparing polycarbonate/oxidized carbon nanofiber composites. This additional step (equivalent to a manufacturing cell) involves the formation of a highly concentrated solid nano-nectar of polycarbonate/carbon nanofiber composite using a solution mixing process followed by melt mixing with pure polycarbonate. This combined method yields excellent dispersion and improved mechanical and thermal properties as compared to the 1-step melt mixing method. The test results indicated that inclusion of carbon nanofibers into composites via the 2-step method resulted in a dramatically reduced (48% lower) coefficient of thermal expansion compared to that of pure polycarbonate, and 30% lower than that from the 1-step processing, at the same loading of 1.0 wt%. Improvements were also found in dynamic mechanical analysis and flexural mechanical properties. The 2-step approach is more precise and leads to better dispersion, higher quality, consistency, and improved performance in critical application areas. It is also consistent with Lean Manufacturing principles, in which manufacturing cells are linked together using less of the key resources and creating a smoother production flow. Therefore, this 2-step process can be more attractive for industry.
Oxidation-driven surface dynamics on NiAl(100)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Hailang; Chen, Xidong; Li, Liang
Atomic steps, a defect common to all crystal surfaces, can play an important role in many physical and chemical processes. However, attempts to predict surface dynamics under nonequilibrium conditions are usually frustrated by poor knowledge of the atomic processes of surface motion arising from mass transport from/to surface steps. Using low-energy electron microscopy that spatially and temporally resolves oxide film growth during the oxidation of NiAl(100) we demonstrate that surface steps are impermeable to oxide film growth. The advancement of the oxide occurs exclusively on the same terrace and requires the coordinated migration of surface steps. The resulting piling up of surface steps ahead of the oxide growth front progressively impedes the oxide growth. This process is reversed during oxide decomposition. The migration of the substrate steps is found to be a surface-step version of the well-known Hele-Shaw problem, governed by detachment (attachment) of Al atoms at step edges induced by the oxide growth (decomposition). As a result, by comparing with the oxidation of NiAl(110) that exhibits unimpeded oxide film growth over substrate steps, we suggest that whenever steps are the source of atoms used for oxide growth they limit the oxidation process; when atoms are supplied from the bulk, the oxidation rate is not limited by the motion of surface steps.
Guiding gate-etch process development using 3D surface reaction modeling for 7nm and beyond
NASA Astrophysics Data System (ADS)
Dunn, Derren; Sporre, John R.; Deshpande, Vaibhav; Oulmane, Mohamed; Gull, Ronald; Ventzek, Peter; Ranjan, Alok
2017-03-01
Increasingly, advanced process nodes such as 7nm (N7) are fundamentally 3D and require stringent control of critical dimensions over high aspect ratio features. Process integration in these nodes requires a deep understanding of complex physical mechanisms to control critical dimensions from lithography through final etch. Polysilicon gate etch processes are critical steps in several device architectures for advanced nodes that rely on self-aligned patterning approaches to gate definition. These processes are required to meet several key metrics: (a) vertical etch profiles over high aspect ratios; (b) clean gate sidewalls free of etch process residue; (c) minimal erosion of liner oxide films protecting key architectural elements such as fins; and (d) residue-free corners at gate interfaces with critical device elements. In this study, we explore how hybrid modeling approaches can be used to model a multi-step finFET polysilicon gate etch process. Initial parts of the patterning process through hardmask assembly are modeled using process emulation. Important aspects of gate definition are then modeled using a particle Monte Carlo (PMC) feature scale model that incorporates surface chemical reactions [1]. When necessary, species and energy flux inputs to the PMC model are derived from simulations of the etch chamber. The modeled polysilicon gate etch process consists of several steps including a hard mask breakthrough step (BT), main feature etch steps (ME), and over-etch steps (OE) that control gate profiles at the gate fin interface. An additional constraint on this etch flow is that fin spacer oxides are left intact after final profile tuning steps. A natural optimization required from these processes is to maximize vertical gate profiles while minimizing erosion of fin spacer films [2].
Adsorption process to recover hydrogen from feed gas mixtures having low hydrogen concentration
Golden, Timothy Christopher; Weist, Jr., Edward Landis; Hufton, Jeffrey Raymond; Novosat, Paul Anthony
2010-04-13
A process for selectively separating hydrogen from at least one more strongly adsorbable component in a plurality of adsorption beds to produce a hydrogen-rich product gas from a low hydrogen concentration feed with a high recovery rate. Each of the plurality of adsorption beds is subjected to a repetitive cycle. The process comprises an adsorption step for producing the hydrogen-rich product from a feed gas mixture comprising 5% to 50% hydrogen, at least two pressure equalization by void space gas withdrawal steps, a provide purge step resulting in a first pressure decrease, a blowdown step resulting in a second pressure decrease, a purge step, at least two pressure equalization by void space gas introduction steps, and a repressurization step. The second pressure decrease is at least 2 times greater than the first pressure decrease.
Ingham, Richard J; Battilocchio, Claudio; Fitzpatrick, Daniel E; Sliwinski, Eric; Hawkins, Joel M; Ley, Steven V
2015-01-01
Performing reactions in flow can offer major advantages over batch methods. However, laboratory flow chemistry processes are currently often limited to single steps or short sequences due to the complexity involved with operating a multi-step process. Using new modular components for downstream processing, coupled with control technologies, more advanced multi-step flow sequences can be realized. These tools are applied to the synthesis of 2-aminoadamantane-2-carboxylic acid. A system comprising three chemistry steps and three workup steps was developed, having sufficient autonomy and self-regulation to be managed by a single operator. PMID:25377747
Ye, Jianchu; Tu, Song; Sha, Yong
2010-10-01
For two-step transesterification biodiesel production from sunflower oil, based on the kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors of the two-step reaction stage are determined quantitatively. In consideration of the transesterification intermediate product, both the traditional distillation separation process and the improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process for the two-step reaction product has a distinct advantage in energy duty and equipment requirements owing to replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.
Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and the steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.
Array automated assembly task low cost silicon solar array project, phase 2
NASA Technical Reports Server (NTRS)
Olson, C.
1980-01-01
Analyses of solar cell and module process steps for throughput rate, cost effectiveness, and reproducibility are reported. In addition to the concentration on cell and module processing sequences, an investigation was made into the capability of using microwave energy in the diffusion, sintering, and thick film firing steps of cell processing. Although the entire process sequence was integrated, the steps are treated individually with test and experimental data, conclusions, and recommendations.
Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu
2016-01-01
A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit-that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: For example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
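A minimal sketch of Steps 2-3 of the model (checking an indicator for special-cause variation with 3-sigma control limits before moving on to benchmarking) is given below; the data values and the simple out-of-limits rule are illustrative assumptions, not the hospital's actual data or the full set of control-chart rules.

```python
# Illustrative check for special-cause variation before benchmarking.
from statistics import mean, stdev

monthly_hai_rate = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7, 2.0, 2.1, 1.9, 2.2]  # made-up data

center = mean(monthly_hai_rate)
sigma = stdev(monthly_hai_rate)
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # simple 3-sigma control limits

special_cause = [x for x in monthly_hai_rate if x > ucl or x < lcl]
if special_cause:
    print("Special-cause variation present: control the process (action plan) before benchmarking.")
else:
    print(f"Process stable (mean {center:.2f}); proceed to internal and external benchmarks.")
```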
Use of aluminum phosphate as the dehydration catalyst in single step dimethyl ether process
Peng, Xiang-Dong; Parris, Gene E.; Toseland, Bernard A.; Battavio, Paula J.
1998-01-01
The present invention pertains to a process for the coproduction of methanol and dimethyl ether (DME) directly from a synthesis gas in a single step (hereafter, the "single step DME process"). In this process, the synthesis gas comprising hydrogen and carbon oxides is contacted with a dual catalyst system comprising a physical mixture of a methanol synthesis catalyst and a methanol dehydration catalyst. The present invention is an improvement to this process for providing an active and stable catalyst system. The improvement comprises the use of an aluminum phosphate based catalyst as the methanol dehydration catalyst. Due to its moderate acidity, such a catalyst avoids the coke formation and catalyst interaction problems associated with the conventional dual catalyst systems taught for the single step DME process.
Fostering Autonomy through Syllabus Design: A Step-by-Step Guide for Success
ERIC Educational Resources Information Center
Ramírez Espinosa, Alexánder
2016-01-01
Promoting learner autonomy is relevant in the field of applied linguistics due to the multiple benefits it brings to the process of learning a new language. However, despite the vast array of research on how to foster autonomy in the language classroom, it is difficult to find step-by-step processes to design syllabi and curricula focused on the…
Zhao, Fanglong; Zhang, Chuanbo; Yin, Jing; Shen, Yueqi; Lu, Wenyu
2015-08-01
In this paper, a two-step resin adsorption technology was investigated for spinosad production and separation as follows: in the first step, resin was added to the fermentor early in the cultivation period to lower the product concentration in the broth during fermentation; in the second step, resin was added after fermentation to adsorb and extract the spinosad. Based on this, a two-step macroporous resin adsorption-membrane separation process for spinosad fermentation, separation, and purification was established. The spinosad concentration in a 5-L fermentor increased by 14.45 % after adding 50 g/L of macroporous resin at the beginning of fermentation. The established two-step macroporous resin adsorption-membrane separation process achieved 95.43 % purity and 87 % yield for spinosad, both higher than the 93.23 % purity and 79.15 % yield obtained by conventional crystallization of spinosad from the aqueous phase. The two-step macroporous resin adsorption method not only couples spinosad fermentation and separation but also increases spinosad productivity. In addition, the two-step macroporous resin adsorption-membrane separation process performs better in spinosad yield and purity.
Code of Federal Regulations, 2011 CFR
2011-10-01
... collection process before the employee provides a urine specimen? 40.63 Section 40.63 Transportation Office... PROGRAMS Urine Specimen Collections § 40.63 What steps does the collector take in the collection process before the employee provides a urine specimen? As the collector, you must take the following steps before...
Code of Federal Regulations, 2013 CFR
2013-10-01
... collection process before the employee provides a urine specimen? 40.63 Section 40.63 Transportation Office... PROGRAMS Urine Specimen Collections § 40.63 What steps does the collector take in the collection process before the employee provides a urine specimen? As the collector, you must take the following steps before...
Code of Federal Regulations, 2014 CFR
2014-10-01
... collection process before the employee provides a urine specimen? 40.63 Section 40.63 Transportation Office... PROGRAMS Urine Specimen Collections § 40.63 What steps does the collector take in the collection process before the employee provides a urine specimen? As the collector, you must take the following steps before...
Code of Federal Regulations, 2012 CFR
2012-10-01
... collection process before the employee provides a urine specimen? 40.63 Section 40.63 Transportation Office... PROGRAMS Urine Specimen Collections § 40.63 What steps does the collector take in the collection process before the employee provides a urine specimen? As the collector, you must take the following steps before...
Carol Clausen
2004-01-01
In this study, three possible improvements to a remediation process for chromated-copper-arsenate (CCA) treated wood were evaluated. The process involves two steps: oxalic acid extraction of wood fiber followed by bacterial culture with Bacillus licheniformis CC01. The three potential improvements to the oxalic acid extraction step were (1) reusing oxalic acid for...
ERIC Educational Resources Information Center
Werner, Linda; McDowell, Charlie; Denner, Jill
2013-01-01
Educational data mining can miss or misidentify key findings about student learning without a transparent process of analyzing the data. This paper describes the first steps in the process of using low-level logging data to understand how middle school students used Alice, an initial programming environment. We describe the steps that were…
NASA Astrophysics Data System (ADS)
Bissadi, Golnaz
Hybrid membranes represent a promising alternative to the limitations of organic and inorganic materials for high productivity and selectivity gas separation membranes. In this study, the previously developed concept of emulsion-polymerized mixed matrix (EPMM) membranes was further advanced by investigating the effects of surfactant and compatibilizer on inorganic loading in poly(2,6-dimethyl-1,4-phenylene oxide) (PPO)-based EPMM membranes, in which inorganic part of the membranes originated from tetraethylorthosilicate (TEOS). The polymerization of TEOS, which consists of hydrolysis of TEOS and condensation of the hydrolyzed TEOS, was carried out as (i) one- and (ii) two-step processes. In the one-step process, the hydrolysis and condensation take place in the same environment of a weak acid provided by the aqueous solution of aluminum hydroxonitrate and sodium carbonate. In the two-step process, the hydrolysis takes place in the environment of a strong acid (solution of hydrochloric acid), whereas the condensation takes place in weak base environment obtained by adding excess of the ammonium hydroxide solution to the acidic solution of the hydrolyzed TEOS. For both one- and two-step processes, the emulsion polymerization of TEOS was carried out in two types of emulsions made of (i) pure trichloroethylene (TCE) solvent, and (ii) 10 w/v% solution of PPO in TCE, using different combinations of the compatibilizer (ethanol) and the surfactant (n-octanol). The experiments with pure TCE, which are referred to as a gravimetric powder method (GPM) allowed assessing the effect of different experimental parameters on the conversion of TEOS. The GPM tests also provided a guide for the synthesis of casting emulsions containing PPO, from which the EPMM membranes were prepared using a spin coating technique. The synthesized EPMM membranes were characterized using 29Si nuclear magnetic resonance (29Si NMR), differential scanning calorimetry (DSC), inductively coupled plasma mass spectrometry (ICP-MS), and gas permeation measurements carried out in a constant pressure (CP) system. The 29Si NMR analysis verified polymerization of TEOS in the emulsions made of pure TCE, and the PPO solution in TCE. The conversions of TEOS in the two-step process in the two types of emulsions were very close to each other. In the case of the one-step process, the conversions in the TCE emulsion were significantly greater than those in the emulsion of the PPO solution in TCE. Consequently, the conversions of TEOS in the EPMM membranes made in the two-step process were greater than those in the EPMM membranes made in the one-step process. The latter ranged between 10 - 20%, while the highest conversion in the two-step process was 74% in the presence of pure compatibilizer with no surfactant. Despite greater conversions and hence the greater inorganic loadings, the EPMM membranes prepared in the two-step process had glass transition temperatures (Tg) only slightly greater than the reference PPO membranes. In contrast, despite relatively low inorganic loadings, the EPMM membranes prepared in the one-step process had Tgs markedly greater than PPO, and showed the expected trend of an increase in Tg with the inorganic loading. These results indicate that in the case of the one-step process the polymerized TEOS was well integrated with the PPO chains and the interactions between the two phases lead to high Tgs. 
On the other hand, this was not the case for the EPMM membranes prepared in the two-step process, suggesting possible phase separation between the polymerized TEOS and the organic phase. The latter was confirmed by detecting no selectivity in the EPMM membranes prepared by the two-step process. In contrast, the EPMM membranes prepared in the one-step process in the presence of the compatibilizer and no surfactant showed 50% greater O2 permeability coefficient and a slightly greater O2/N2 permeability ratio compared to the reference PPO membranes.
EMAGE mouse embryo spatial gene expression database: 2010 update
Richardson, Lorna; Venkataraman, Shanmugasundaram; Stevenson, Peter; Yang, Yiya; Burton, Nicholas; Rao, Jianguo; Fisher, Malcolm; Baldock, Richard A.; Davidson, Duncan R.; Christiansen, Jeffrey H.
2010-01-01
EMAGE (http://www.emouseatlas.org/emage) is a freely available online database of in situ gene expression patterns in the developing mouse embryo. Gene expression domains from raw images are extracted and integrated spatially into a set of standard 3D virtual mouse embryos at different stages of development, which allows data interrogation by spatial methods. An anatomy ontology is also used to describe sites of expression, which allows data to be queried using text-based methods. Here, we describe recent enhancements to EMAGE including: the release of a completely re-designed website, which offers integration of many different search functions in HTML web pages, improved user feedback and the ability to find similar expression patterns at the click of a button; back-end refactoring from an object oriented to relational architecture, allowing associated SQL access; and the provision of further access by standard formatted URLs and a Java API. We have also increased data coverage by sourcing from a greater selection of journals and developed automated methods for spatial data annotation that are being applied to spatially incorporate the genome-wide (∼19 000 gene) ‘EURExpress’ dataset into EMAGE. PMID:19767607
SuperB Simulation Production System
NASA Astrophysics Data System (ADS)
Tomassetti, L.; Bianchi, F.; Ciaschini, V.; Corvo, M.; Del Prete, D.; Di Simone, A.; Donvito, G.; Fella, A.; Franchini, P.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Paolini, A.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.
2012-12-01
The SuperB asymmetric e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a peak luminosity of 10^36 cm^-2 s^-1. The SuperB Computing group is developing a simulation production framework capable of satisfying the experiment's needs. It provides access to distributed resources in order to support both the detector design definition and its performance evaluation studies. During the last year the framework has evolved in terms of job workflow, Grid service interfaces and technology adoption. A complete code refactoring and sub-component language porting now permit the framework to sustain distributed production involving resources from two continents and multiple Grid flavors. In this paper we report a complete description of the current state of the production system, its evolution and its integration with Grid services; in particular, we focus on the utilization of new Grid component features such as those in LB and WMS version 3. Results from the last official SuperB production cycle are also reported.
Phage Therapy in the Era of Synthetic Biology.
Barbu, E Magda; Cady, Kyle C; Hubby, Bolyn
2016-10-03
For more than a century, bacteriophage (or phage) research has enabled some of the most important discoveries in biological sciences and has equipped scientists with many of the molecular biology tools that have advanced our understanding of replication, maintenance, and expression of genetic material. Phages have also been recognized and exploited as natural antimicrobial agents and nanovectors for gene therapy, but their potential as therapeutics has not been fully exploited in Western medicine because of challenges such as narrow host range, bacterial resistance, and unique pharmacokinetics. However, increasing concern related to the emergence of bacteria resistant to multiple antibiotics has heightened interest in phage therapy and the development of strategies to overcome hurdles associated with bacteriophage therapeutics. Recent progress in sequencing technologies, DNA manipulation, and synthetic biology allowed scientists to refactor the entire bacterial genome of Mycoplasma mycoides, thereby creating the first synthetic cell. These new strategies for engineering genomes may have the potential to accelerate the construction of designer phage genomes with superior therapeutic potential. Here, we discuss the use of phage as therapeutics, as well as how synthetic biology can create bacteriophage with desirable attributes. Copyright © 2016 Cold Spring Harbor Laboratory Press; all rights reserved.
Next Generation Sequencing of Actinobacteria for the Discovery of Novel Natural Products
Gomez-Escribano, Juan Pablo; Alt, Silke; Bibb, Mervyn J.
2016-01-01
Like many fields of the biosciences, actinomycete natural products research has been revolutionised by next-generation DNA sequencing (NGS). Hundreds of new genome sequences from actinobacteria are made public every year, many of them as a result of projects aimed at identifying new natural products and their biosynthetic pathways through genome mining. Advances in these technologies in the last five years have meant not only a reduction in the cost of whole genome sequencing, but also a substantial increase in the quality of the data, having moved from obtaining a draft genome sequence comprised of several hundred short contigs, sometimes of doubtful reliability, to the possibility of obtaining an almost complete and accurate chromosome sequence in a single contig, allowing a detailed study of gene clusters and the design of strategies for refactoring and full gene cluster synthesis. The impact that these technologies are having in the discovery and study of natural products from actinobacteria, including those from the marine environment, is only starting to be realised. In this review we provide a historical perspective of the field, analyse the strengths and limitations of the most relevant technologies, and share the insights acquired during our genome mining projects. PMID:27089350
Systemic safety project selection tool.
DOT National Transportation Integrated Search
2013-07-01
"The Systemic Safety Project Selection Tool presents a process for incorporating systemic safety planning into traditional safety management processes. The Systemic Tool provides a step-by-step process for conducting systemic safety analysis; conside...
The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns
NASA Astrophysics Data System (ADS)
Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo
Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.
Processes for producing low cost, high efficiency silicon solar cells
Rohatgi, Ajeet; Doshi, Parag; Tate, John Keith; Mejia, Jose; Chen, Zhizhang
1998-06-16
Processes which utilize rapid thermal processing (RTP) are provided for inexpensively producing high efficiency silicon solar cells. The RTP processes preserve minority carrier bulk lifetime τ and permit selective adjustment of the depth of the diffused regions, including emitter and back surface field (bsf), within the silicon substrate. In a first RTP process, an RTP step is utilized to simultaneously diffuse phosphorus and aluminum into the front and back surfaces, respectively, of a silicon substrate. Moreover, an in situ controlled cooling procedure preserves the carrier bulk lifetime τ and permits selective adjustment of the depth of the diffused regions. In a second RTP process, both simultaneous diffusion of the phosphorus and aluminum as well as annealing of the front and back contacts are accomplished during the RTP step. In a third RTP process, the RTP step accomplishes simultaneous diffusion of the phosphorus and aluminum, annealing of the contacts, and annealing of a double-layer antireflection/passivation coating SiN/SiOx. In a fourth RTP process, the process of applying front and back contacts is broken up into two separate respective steps, which enhances the efficiency of the cells, at a slight time expense. In a fifth RTP process, a second RTP step is utilized to fire and adhere the screen printed or evaporated contacts to the structure.
NASA Astrophysics Data System (ADS)
Benlattar, M.; El koraychy, E.; Kotri, A.; Mazroui, M.
2017-12-01
We have used molecular dynamics simulations combined with an interatomic potential derived from the embedded atom method, to investigate the hetero-diffusion of Au adatom near a stepped Ag(110) surface with the height of one monoatomic layer. The activation energies for different diffusion processes, which occur on the terrace and near the step edge, are calculated both by molecular statics and molecular dynamics simulations. Static energies are found by the drag method, whereas the dynamic barriers are computed at high temperature from the Arrhenius plots. Our numerical results reveal that the jump process requires very high activation energy compared to the exchange process either on the terrace or near the step edge. In this work, other processes, such as upward and downward diffusion at step edges, have also been discussed.
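The dynamic barriers referred to here are conventionally obtained from the slope of an Arrhenius plot; the standard relation, included as general background rather than as a formula quoted from the paper, is shown below.

```latex
% Standard Arrhenius analysis: the hop or exchange rate \Gamma measured at
% several high temperatures is fitted to
%   \Gamma(T) = \Gamma_0 \exp\left(-E_a / k_B T\right),
% so the activation energy follows from the slope of \ln\Gamma versus 1/(k_B T):
E_a = -\,\frac{\mathrm{d}\,\ln\Gamma}{\mathrm{d}\,\bigl(1/(k_B T)\bigr)}
```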
45 CFR 16.7 - The first steps in the appeal process: The notice of appeal and the Board's response.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false The first steps in the appeal process: The notice... SERVICES GENERAL ADMINISTRATION PROCEDURES OF THE DEPARTMENTAL GRANT APPEALS BOARD § 16.7 The first steps... of these procedures, and advise the appellant of the next steps. The Board will also send a copy of...
Application of a 2-step process for the biological treatment of sulfidic spent caustics.
de Graaff, Marco; Klok, Johannes B M; Bijmans, Martijn F M; Muyzer, Gerard; Janssen, Albert J H
2012-03-01
This research demonstrates the feasibility and advantages of a 2-step process for the biological treatment of sulfidic spent caustics under halo-alkaline conditions (i.e. pH 9.5; Na(+) = 0.8 M). Experiments with synthetically prepared solutions were performed in a continuously fed system consisting of two gas-lift reactors in series operated at aerobic conditions at 35 °C. The detoxification of sulfide to thiosulfate in the first step allowed the successful biological treatment of total-S loading rates up to 33 mmol L(-1) day(-1). In the second, biological step, the remaining sulfide and thiosulfate was completely converted to sulfate by haloalkaliphilic sulfide oxidizing bacteria. Mathematical modeling of the 2-step process shows that under the prevailing conditions an optimal reactor configuration consists of 40% 'abiotic' and 60% 'biological' volume, whilst the total reactor volume is 22% smaller than for the 1-step process. Copyright © 2011 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false The next step in the appeal process: Preparation of an appeal file and written argument. 16.8 Section 16.8 Public Welfare DEPARTMENT OF HEALTH AND... step in the appeal process: Preparation of an appeal file and written argument. Except in expedited...
2012-01-01
Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
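A minimal sketch of how such a three-step pipeline can be wired together is given below; the patch features, the toy labels, and the placeholder treatment of the contour-pair and level-set stages are assumptions for illustration, not the published Cytoseg implementation.

```python
# Cytoseg-like three-step sketch: (1) random-forest patch classification,
# (2) contour filtering with a second classifier, (3) a level-set operation
# seeded from the earlier steps. Steps 2 and 3 are left as placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_patches(image: np.ndarray, size: int = 15) -> np.ndarray:
    """Flatten square patches around every interior pixel (toy feature set)."""
    half = size // 2
    rows = []
    for i in range(half, image.shape[0] - half):
        for j in range(half, image.shape[1] - half):
            rows.append(image[i - half:i + half + 1, j - half:j + half + 1].ravel())
    return np.asarray(rows)

# Step 1: train the patch classifier on labelled pixels (labels are illustrative).
rng = np.random.default_rng(0)
train_img = rng.random((64, 64))
train_lbl = (train_img > 0.5).astype(int)        # stand-in for manual labels
half = 7
X = extract_patches(train_img)
y = train_lbl[half:-half, half:-half].ravel()
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Steps 2-3 (placeholders): contour-pair classification would filter the
# probability map, and a level-set operation would then be seeded from it.
test_img = rng.random((64, 64))
prob_map = clf.predict_proba(extract_patches(test_img))[:, 1]
print("mean mitochondria probability:", prob_map.mean())
```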
Park, Jae-Min; Jang, Se Jin; Lee, Sang-Ick; Lee, Won-Jun
2018-03-14
We designed cyclosilazane-type silicon precursors and proposed a three-step plasma-enhanced atomic layer deposition (PEALD) process to prepare silicon nitride films with high quality and excellent step coverage. The cyclosilazane-type precursor, 1,3-di-isopropylamino-2,4-dimethylcyclosilazane (CSN-2), has a closed ring structure for good thermal stability and high reactivity. CSN-2 showed thermal stability up to 450 °C and a sufficient vapor pressure of 4 Torr at 60 °C. The energy for the chemisorption of CSN-2 on the undercoordinated silicon nitride surface as calculated by density functional theory method was -7.38 eV. The PEALD process window was between 200 and 500 °C, with a growth rate of 0.43 Å/cycle. The best film quality was obtained at 500 °C, with hydrogen impurity of ∼7 atom %, oxygen impurity less than 2 atom %, low wet etching rate, and excellent step coverage of ∼95%. At 300 °C and lower temperatures, the wet etching rate was high especially at the lower sidewall of the trench pattern. We introduced the three-step PEALD process to improve the film quality and the step coverage on the lower sidewall. The sequence of the three-step PEALD process consists of the CSN-2 feeding step, the NH3/N2 plasma step, and the N2 plasma step. The H radicals in NH3/N2 plasma efficiently remove the ligands from the precursor, and the N2 plasma after the NH3 plasma removes the surface hydrogen atoms to activate the adsorption of the precursor. The films deposited at 300 °C using the novel precursor and the three-step PEALD process showed a significantly improved step coverage of ∼95% and an excellent wet etching resistance at the lower sidewall, which is only twice as high as that of the blanket film prepared by low-pressure chemical vapor deposition.
24 CFR 55.20 - Decision making process.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Decision making process. 55.20... Decision making process. The decision making process for compliance with this part contains eight steps... decision making process are: (a) Step 1. Determine whether the proposed action is located in a 100-year...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, Yu; Lei, Jixue; Yin, Bing
2014-03-17
A simple two-step hydrothermal process was proposed for enhancing the performance of a nanogenerator on a flexible and wearable terylene-fabric substrate. With this method, a significant enhancement in the output voltage of the nanogenerator, from ∼10 mV to 7 V, was achieved compared with the device made by the conventional one-step process. In addition, another advantage of the devices synthesized by the two-step hydrothermal process was that their output voltages are sensitive only to strain rather than strain rate. The devices, with their high output voltage, have the ability to power common electric devices and will have important applications in flexible electronics and wearable devices.
The developmental processes for NANDA International Nursing Diagnoses.
Scroggins, Leann M
2008-01-01
This study aims to provide a step-by-step procedural guideline for the development of a nursing diagnosis that meets the necessary criteria for inclusion in the NANDA International and NNN classification systems. The guideline is based on the processes developed by the Diagnosis Development Committee of NANDA International and includes the necessary processes for development of Actual, Wellness, Health Promotion, and Risk nursing diagnoses. Definitions of Actual, Wellness, Health Promotion, and Risk nursing diagnoses along with inclusion criteria and taxonomy rules have been incorporated into the guideline to streamline the development and review processes for submitted diagnoses. A step-by-step procedural guideline will assist the submitter to move efficiently and effectively through the submission process, resulting in increased submissions and enhancement of the NANDA International and NNN classification systems.
Method and apparatus for automated assembly
Jones, Rondall E.; Wilson, Randall H.; Calton, Terri L.
1999-01-01
A process and apparatus generates a sequence of steps for assembly or disassembly of a mechanical system. Each step in the sequence is geometrically feasible, i.e., the part motions required are physically possible. Each step in the sequence is also constraint feasible, i.e., the step satisfies user-definable constraints. Constraints allow process and other such limitations, not usually represented in models of the completed mechanical system, to affect the sequence.
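The idea of emitting only steps that are both geometrically and constraint feasible can be sketched as a simple filtered search; the toy parts, the precedence-based geometric test, and the example constraint below are hypothetical and are not drawn from the patent.

```python
# Constraint-filtered disassembly sequencing sketch: a step is emitted only if
# it passes a geometric-feasibility test and every user-defined constraint.
from typing import Callable, List

def plan_sequence(parts: List[str],
                  geom_ok: Callable[[str, List[str]], bool],
                  constraints: List[Callable[[str, List[str]], bool]]) -> List[str]:
    remaining, sequence = list(parts), []
    while remaining:
        candidates = [p for p in remaining
                      if geom_ok(p, remaining) and all(c(p, remaining) for c in constraints)]
        if not candidates:
            raise RuntimeError("no feasible step; constraints too strict")
        step = candidates[0]
        sequence.append(step)
        remaining.remove(step)
    return sequence

# Toy example: the cover must come off before inner parts (geometric test),
# and a process constraint forbids removing the fragile sensor first.
order = {"cover": 0, "bracket": 1, "sensor": 1, "base": 2}
geom = lambda p, rem: all(order[p] <= order[q] for q in rem)
no_sensor_first = lambda p, rem: not (p == "sensor" and len(rem) == len(order))
print(plan_sequence(list(order), geom, [no_sensor_first]))
```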
Einterz, E M; Younge, O; Hadi, C
2018-06-01
To determine, subsequent to the expansion of a county health department's refugee screening process from a one-step to a two-step process, the change in early loss to follow-up and time to initiation of treatment of new refugees with latent tuberculosis infection (LTBI). Quasi-experimental, quantitative. Review of patient medical records. Among 384 refugees who met the case definition of LTBI without prior tuberculosis (TB) classification, the number of cases lost to early follow-up fell from 12.5% to 0% after expansion to a two-step screening process. The average interval between in-country arrival and initiation of LTBI treatment was shortened by 41.4%. The addition of a second step to the refugee screening process was correlated with significant improvements in the county's success in tracking and treating cases of LTBI in refugees. Given the disproportionate importance of foreign-born cases of LTBI to the incidence of TB disease in low-incidence countries, these improvements could have a substantial impact on overall TB control, and the process described could serve as a model for other local health department refugee screening programs. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
A comparison between atmospheric/humidity and vacuum cyanoacrylate fuming of latent fingermarks.
Farrugia, Kevin J; Fraser, Joanna; Friel, Lauren; Adams, Duncan; Attard-Montalto, Nicola; Deacon, Paul
2015-12-01
A number of pseudo-operational trials were set up to compare the atmospheric/humidity and vacuum cyanoacrylate fuming processes on plastic carrier bags. The fuming processes were compared using two-step cyanoacrylate fuming with basic yellow 40 (BY40) staining and a one-step fluorescent cyanoacrylate fuming, Lumicyano 4%. Preliminary work using planted fingermarks and split depletions were performed to identify the optimum vacuum fuming conditions. The first pseudo-operational trial compared the different fuming conditions (atmospheric/humidity vs. vacuum) for the two-step process where an additional 50% more marks were detected with the atmospheric/humidity process. None of the marks by the vacuum process could be observed visually; however, a significant number of marks were detected by fluorescence after BY40 staining. The second trial repeated the same work in trial 1 using the one-step cyanoacrylate process, Lumicyano at a concentration of 4%. Trial 2 provided comparable results to trial 1 and all the items were then re-treated with Lumicyano 4% at atmospheric/humidity conditions before dyeing with BY40 to provide the sequences of process A (Lumicyano 4% atmospheric-Lumicyano 4% atmospheric-BY40) and process B (Lumicyano 4% vacuum-Lumicyano 4% atmospheric-BY40). The number of marks (visual and fluorescent) was counted after each treatment with a substantial increase in the number of detected marks in the second and third treatments of the process. The increased detection rate after the double Lumicyano process was unexpected and may have important implications. Trial 3 was performed to investigate whether the amount of cyanoacrylate and/or fuming time had an impact on the results observed in trial 2 whereas trial 4 assessed if the double process using conventional cyanoacrylate, rather than Lumicyano 4%, provided an increased detection rate. Trials 3 and 4 confirmed that doubling the amount of Lumicyano 4% cyanoacrylate and fuming time produced a lower detection rate than the double process with Lumicyano 4%. Furthermore, the double process with conventional cyanoacrylate did not provide any benefit. Scanning electron microscopy was also performed to investigate the morphology of the cyanoacrylate polymer under different conditions. The atmospheric/humidity process appears to be superior to the vacuum process for both the two-step and one-step cyanoacrylate fuming, although the two-step process performed better in comparison to the one-step process under vacuum conditions. Nonetheless, the use of vacuum cyanoacrylate fuming may have certain operational advantages and its use does not adversely affect subsequent cyanoacrylate fuming with atmospheric/humidity conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2005-05-03
A new class of surface modified particles and a multi-step Michael-type addition surface modification process for the preparation of the same is provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.
The Relaxation of Vicinal (001) with ZigZag [110] Steps
NASA Astrophysics Data System (ADS)
Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.
2012-02-01
This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
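For reference, the modified generalized Wigner distribution mentioned for the TWD fits has the standard form below; this is general background on the fitting function, not a formula quoted from the talk.

```latex
% Generalized Wigner surmise for the scaled terrace-width distribution;
% s = \ell / \langle\ell\rangle is the terrace width scaled by its mean,
% \varrho parameterizes the effective step-step repulsion, and the constants
% a_\varrho, b_\varrho are fixed by unit normalization and unit mean.
P_{\varrho}(s) = a_{\varrho}\, s^{\varrho}\, \exp\left(-b_{\varrho}\, s^{2}\right)
```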
NASA Astrophysics Data System (ADS)
Yao, Jianzhuang; Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo
2016-02-01
Extensive computational modeling and simulations have been carried out, in the present study, to uncover the fundamental reaction pathway for butyrylcholinesterase (BChE)-catalyzed hydrolysis of ghrelin, demonstrating that the acylation process of BChE-catalyzed hydrolysis of ghrelin follows an unprecedented single-step reaction pathway and the single-step acylation process is rate-determining. The free energy barrier (18.8 kcal/mol) calculated for the rate-determining step is reasonably close to the experimentally-derived free energy barrier (~19.4 kcal/mol), suggesting that the obtained mechanistic insights are reasonable. The single-step reaction pathway for the acylation is remarkably different from the well-known two-step acylation reaction pathway for numerous ester hydrolysis reactions catalyzed by a serine esterase. This is the first time demonstrating that a single-step reaction pathway is possible for an ester hydrolysis reaction catalyzed by a serine esterase and, therefore, one no longer can simply assume that the acylation process must follow the well-known two-step reaction pathway.
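The experimentally derived barrier quoted for comparison is conventionally obtained from a measured rate constant via transition state theory; the standard Eyring relation, shown as background rather than as a formula from the paper, is:

```latex
% Eyring (transition-state-theory) relation commonly used to convert a measured
% catalytic rate constant k_cat into a phenomenological activation free energy:
k_{\mathrm{cat}} = \frac{k_B T}{h}\,\exp\left(-\frac{\Delta G^{\ddagger}}{RT}\right)
\qquad\Longleftrightarrow\qquad
\Delta G^{\ddagger} = RT\,\ln\frac{k_B T}{h\,k_{\mathrm{cat}}}
```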
Sequential control of step-bunching during graphene growth on SiC (0001)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Jianfeng; Kusunoki, Michiko; Yasui, Osamu
2016-08-22
We have investigated the relation between the step-bunching and graphene growth phenomena on an SiC substrate. We found that only a minimum amount of step-bunching occurred during the graphene growth process with a high heating rate. On the other hand, a large amount of step-bunching occurred using a slow heating process. These results indicated that we can control the degree of step-bunching during graphene growth by controlling the heating rate. We also found that graphene coverage suppressed step bunching, which is an effective methodology not only in the graphene technology but also in the SiC-based power electronics.
Processes for producing low cost, high efficiency silicon solar cells
Rohatgi, A.; Doshi, P.; Tate, J.K.; Mejia, J.; Chen, Z.
1998-06-16
Processes which utilize rapid thermal processing (RTP) are provided for inexpensively producing high efficiency silicon solar cells. The RTP processes preserve minority carrier bulk lifetime τ and permit selective adjustment of the depth of the diffused regions, including emitter and back surface field (bsf), within the silicon substrate. In a first RTP process, an RTP step is utilized to simultaneously diffuse phosphorus and aluminum into the front and back surfaces, respectively, of a silicon substrate. Moreover, an in situ controlled cooling procedure preserves the carrier bulk lifetime τ and permits selective adjustment of the depth of the diffused regions. In a second RTP process, both simultaneous diffusion of the phosphorus and aluminum as well as annealing of the front and back contacts are accomplished during the RTP step. In a third RTP process, the RTP step accomplishes simultaneous diffusion of the phosphorus and aluminum, annealing of the contacts, and annealing of a double-layer antireflection/passivation coating SiN/SiOx. In a fourth RTP process, the process of applying front and back contacts is broken up into two separate respective steps, which enhances the efficiency of the cells, at a slight time expense. In a fifth RTP process, a second RTP step is utilized to fire and adhere the screen printed or evaporated contacts to the structure. 28 figs.
Xu, Yongxiang; Yuan, Shenpo; Han, Jianmin; Lin, Hong; Zhang, Xuehui
2017-11-15
The development of scaffolds to mimic the gradient structure of natural tissue is an important consideration for effective tissue engineering. In the present study, a physically cross-linked chitosan hydrogel with gradient structures was fabricated via a step-by-step cross-linking process using sodium tripolyphosphate and sodium hydroxide as sequential cross-linkers. Chitosan hydrogels with different structures (single, double, and triple layers) were prepared by modifying the gelling process. The properties of the hydrogels were further adjusted by varying the gelling conditions, such as gelling time, pH, and composition of the cross-linking solution. In the MTT assay, slight cytotoxicity was observed for hydrogels containing uncross-linked chitosan solution, and no cytotoxicity was observed for the other hydrogels. The results suggest that step-by-step cross-linking represents a practicable method to fabricate scaffolds with gradient structures. Copyright © 2017. Published by Elsevier Ltd.
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
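As a rough illustration of the claimed structure (not code from the patent), the sketch below associates one agent with each process and drives all agents from a message loop carrying the three discrete events named in the claim; the class, function, and event names and the agent behavior are hypothetical placeholders.

from dataclasses import dataclass

# Discrete events named in the claim; the string values are illustrative.
CLOCK_TICK = "clock_tick"
RESOURCES_RECEIVED = "resources_received"
REQUEST_OUTPUT = "request_for_output_production"

@dataclass
class ProcessAgent:
    """One distributed agent associated with a single manufacturing process."""
    name: str
    inventory: int = 0
    produced: int = 0

    def handle(self, event: str) -> None:
        # Each discrete event triggers a programmed response.
        if event == CLOCK_TICK:
            pass                      # e.g. advance internal timers
        elif event == RESOURCES_RECEIVED:
            self.inventory += 1
        elif event == REQUEST_OUTPUT and self.inventory > 0:
            self.inventory -= 1
            self.produced += 1

def run_message_loop(agents, events):
    """Transmit the discrete events to each agent in a message loop."""
    for event in events:
        for agent in agents:
            agent.handle(event)

agents = [ProcessAgent("milling"), ProcessAgent("assembly")]
run_message_loop(agents, [RESOURCES_RECEIVED, CLOCK_TICK, REQUEST_OUTPUT])
print([(a.name, a.produced) for a in agents])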
Making DATA Work: A Process for Conducting Action Research
ERIC Educational Resources Information Center
Young, Anita; Kaffenberger, Carol
2013-01-01
This conceptual model introduces a process to help school counselors use data to drive decision making and offers examples to implement the process. A step-by-step process is offered to help school counselors and school counselor supervisors address educational issues, close achievement gaps, and demonstrate program effectiveness. To illustrate…
ERIC Educational Resources Information Center
Stille, J. K.
1981-01-01
Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
77 FR 67340 - National Fire Codes: Request for Comments on NFPA's Codes and Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
... the process. The Code Revision Process contains four basic steps that are followed for developing new documents as well as revising existing documents. Step 1: Public Input Stage, which results in the First Draft Report (formerly ROP); Step 2: Comment Stage, which results in the Second Draft Report (formerly...
Lights, Camera, Action: Facilitating the Design and Production of Effective Instructional Videos
ERIC Educational Resources Information Center
Di Paolo, Terry; Wakefield, Jenny S.; Mills, Leila A.; Baker, Laura
2017-01-01
This paper outlines a rudimentary process intended to guide faculty in K-12 and higher education through the steps involved to produce video for their classes. The process comprises four steps: planning, development, delivery and reflection. Each step is infused with instructional design information intended to support the collaboration between…
Ren, Xiaojun; Deng, Ruijie; Wang, Lida; Zhang, Kaixiang; Li, Jinghong
2017-08-01
RNA splicing, which mainly involves two transesterification steps, is a fundamental process of gene expression and its abnormal regulation contributes to serious genetic diseases. Antisense oligonucleotides (ASOs) are genetic control tools that can be used to specifically control genes through alteration of the RNA splicing pathway. Despite intensive research, how ASOs or various other factors influence the multiple processes of RNA splicing still remains obscure. This is largely due to an inability to analyze the splicing efficiency of each step in the RNA splicing process with high sensitivity. We addressed this limitation by introducing a padlock probe-based isothermal amplification assay to achieve quantification of the specific products in different splicing steps. With this amplified assay, the roles that ASOs play in RNA splicing inhibition in the first and second steps could be distinguished. We identified that 5'-ASO could block RNA splicing by inhibiting the first step, while 3'-ASO could block RNA splicing by inhibiting the second step. This method provides a versatile tool for assisting efficient ASO design and discovering new splicing modulators and therapeutic drugs.
Conceptual analysis of Physiology of vision in Ayurveda.
Balakrishnan, Praveen; Ashwini, M J
2014-07-01
The process by which the world outside is seen is termed the visual process or physiology of vision. There are three phases in this visual process: the phase of refraction of light, the phase of conversion of light energy into electrical impulses, and finally peripheral and central neurophysiology. With the advent of modern instruments, the step-by-step biochemical changes occurring at each level of the visual process have been deciphered. Many investigations have emerged to track these changes and help diagnose the exact nature of the disease. Ayurveda has described this physiology of vision based on the functions of vata and pitta. Tarka Sangraha, a philosophical textbook of ayurveda, gives certain basic facts about the visual process. This article discusses the second and third phases of the visual process. A step-by-step analysis of the visual process through the spectacles of ayurveda, amalgamated with the basics of philosophy from Tarka Sangraha, has been carried out critically to generate a concrete idea of the physiology and thereby interpret the pathology on the grounds of ayurveda based on the investigative reports.
An Application of Business Process Management to Health Care Facilities.
Hassan, Mohsen M D
The purpose of this article is to help health care facility managers and personnel identify significant elements of their facilities to address, and steps and actions to follow, when applying business process management to them. The ABPMP (Association of Business Process Management Professionals) life-cycle model of business process management is adopted, and steps from Lean, business process reengineering, and Six Sigma, and actions from operations management, are presented to implement it. Managers of health care facilities can find in business process management a more comprehensive approach to improving their facilities than Lean, Six Sigma, business process reengineering, and ad hoc approaches, one that does not conflict with them because many of their elements can be included under its umbrella. Furthermore, the suggested application of business process management can guide managers and relieve them from having to select among these approaches, as well as provide them with specific steps and actions to follow. This article fills a gap in the literature by presenting a much-needed comprehensive application of business process management to health care facilities, with specific steps and actions for implementation.
Sun, Shaolong; Zhang, Lidan; Liu, Fang; Fan, Xiaolin; Sun, Run-Cang
2018-01-01
To increase the production of bioethanol, a two-step process based on hydrothermal and dilute alkaline treatment has been applied to reduce the natural resistance of biomass. However, that process required a large amount of water and a long operation time due to the solid/liquid separation before the alkaline treatment, which decreased the net economic profit of bioethanol production. Therefore, four one-step processes, differing in the order of hydrothermal and alkaline treatment, were developed to enhance the glucose concentration obtained from wheat straw by enzymatic saccharification. The aim of the present study was to systematically evaluate the effects of the different one-step processes by analyzing the physicochemical properties (composition, structural change, crystallinity, surface morphology, and BET surface area) and enzymatic saccharification of the treated substrates. In this study, hemicelluloses and lignins were removed from wheat straw and the morphological structures were destroyed to various extents during the four one-step processes, which was favorable for cellulase adsorption on cellulose. A positive correlation was also observed between the crystallinity and the enzymatic saccharification rate of the substrate under the conditions given. The surface area of the substrate was positively related to the concentration of glucose in this study. Compared to the control (3.0 g/L) and the substrates treated by the other three one-step processes (11.2-14.6 g/L), the substrate treated by the one-step process based on successive hydrothermal and alkaline treatment gave a maximum glucose concentration of 18.6 g/L, owing to its high cellulose content and surface area, accompanied by removal of large amounts of lignins and hemicelluloses. The present study demonstrated that the order of hydrothermal and alkaline treatment had significant effects on the physicochemical properties and enzymatic saccharification of wheat straw. The one-step process based on successive hydrothermal and alkaline treatment is a simple and economically feasible method for the production of glucose, which can be further converted into bioethanol.
Reversing the conventional leather processing sequence for cleaner leather production.
Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari
2006-02-01
Conventional leather processing generally involves a combination of single and multistep processes that employs as well as expels various biological, inorganic, and organic materials. It involves nearly 14-15 steps and discharges a huge amount of pollutants. This is primarily due to the fact that conventional leather processing employs a "do-undo" process logic. In this study, the conventional leather processing steps have been reversed to overcome the problems associated with the conventional method. The charges of the skin matrix and of the chemicals and pH profiles of the process have been judiciously used for reversing the process steps. This reversed process eventually avoids several acidification and basification/neutralization steps used in conventional leather processing. The developed process has been validated through various analyses such as chromium content, shrinkage temperature, softness measurements, scanning electron microscopy, and physical testing of the leathers. Further, the performance of the leathers is shown to be on par with conventionally processed leathers through bulk property evaluation. The process enjoys a significant reduction in COD and TS by 53 and 79%, respectively. Water consumption and discharge is reduced by 65 and 64%, respectively. Also, the process benefits from significant reduction in chemicals, time, power, and cost compared to the conventional process.
Smejkal, Benjamin; Agrawal, Neeraj J; Helk, Bernhard; Schulz, Henk; Giffard, Marion; Mechelke, Matthias; Ortner, Franziska; Heckmeier, Philipp; Trout, Bernhardt L; Hekmat, Dariusch
2013-09-01
The potential of process crystallization for purification of a therapeutic monoclonal IgG1 antibody was studied. The purified antibody was crystallized in non-agitated micro-batch experiments for the first time. A direct crystallization from clarified CHO cell culture harvest was inhibited by high salt concentrations. The salt concentration of the harvest was reduced by a simple pretreatment step. The crystallization process from pretreated harvest was successfully transferred to stirred tanks and scaled-up from the mL-scale to the 1 L-scale for the first time. The crystallization yield after 24 h was 88-90%. A high purity of 98.5% was reached after a single recrystallization step. A 17-fold host cell protein reduction was achieved and DNA content was reduced below the detection limit. High biological activity of the therapeutic antibody was maintained during the crystallization, dissolving, and recrystallization steps. Crystallization was also performed with impure solutions from intermediate steps of a standard monoclonal antibody purification process. It was shown that process crystallization has a strong potential to replace Protein A chromatography. Fast dissolution of the crystals was possible. Furthermore, it was shown that crystallization can be used as a concentrating step and can replace several ultra-/diafiltration steps. Molecular modeling suggested that a negative electrostatic region with interspersed exposed hydrophobic residues on the Fv domain of this antibody is responsible for the high crystallization propensity. As a result, process crystallization, following the identification of highly crystallizable antibodies using molecular modeling tools, can be recognized as an efficient, scalable, fast, and inexpensive alternative to key steps of a standard purification process for therapeutic antibodies. Copyright © 2013 Wiley Periodicals, Inc.
Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan
2015-08-05
Methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost, solution-processable technology, and their power conversion efficiency has increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process to prepare compact perovskite films with large grain size. Herein, a new method is developed to achieve excellent CH3NH3PbI3-xClx films with fine morphology and crystallization based on one-step deposition and a two-step annealing process. This method includes spin-coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at a molar ratio of 1:1:4 in dimethylformamide (DMF), followed by two-step annealing (TSA). The first annealing is a solvent-induced process in DMF that promotes migration and interdiffusion of the solvent-assisted precursor ions and molecules and enables large grain growth. The second annealing is a thermal-induced process that further improves the morphology and crystallization of the films. Compact perovskite films were successfully prepared with grain size up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, while they are 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by the one-step thermal and one-step solvent processes, respectively. On the basis of the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm(-2)).
NASA Astrophysics Data System (ADS)
Cavanaugh, C.; Gille, J.; Francis, G.; Nardi, B.; Hannigan, J.; McInerney, J.; Krinsky, C.; Barnett, J.; Dean, V.; Craig, C.
2005-12-01
The High Resolution Dynamics Limb Sounder (HIRDLS) instrument onboard the NASA Aura spacecraft experienced a rupture of the thermal blanketing material (Kapton) during the rapid depressurization of launch. The Kapton draped over the HIRDLS scan mirror, severely limiting the aperture through which HIRDLS views space and Earth's atmospheric limb. In order for HIRDLS to achieve its intended measurement goals, rapid characterization of the anomaly, and rapid recovery from it were required. The recovery centered around a new processing module inserted into the standard HIRDLS processing scheme, with a goal of minimizing the effect of the anomaly on the already existing processing modules. We describe the software infrastructure on which the new processing module was built, and how that infrastructure allows for rapid application development and processing response. The scope of the infrastructure spans three distinct anomaly recovery steps and the means for their intercommunication. Each of the three recovery steps (removing the Kapton-induced oscillation in the radiometric signal, removing the Kapton signal contamination upon the radiometric signal, and correcting for the partially-obscured atmospheric view) is completely modularized and insulated from the other steps, allowing focused and rapid application development towards a specific step, and neutralizing unintended inter-step influences, thus greatly shortening the design-development-test lifecycle. The intercommunication is also completely modularized and has a simple interface to which the three recovery steps adhere, allowing easy modification and replacement of specific recovery scenarios, thereby heightening the processing response.
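The abstract does not publish the HIRDLS code, so the sketch below only illustrates the kind of modular design it describes: each recovery step implements one small interface and communicates with its neighbours only through a shared record, so any step can be modified or replaced without touching the others. The class names and the numerical "corrections" are hypothetical placeholders.

class RecoveryStep:
    """Common interface: each recovery step consumes and returns one data record."""
    def run(self, record: dict) -> dict:
        raise NotImplementedError

class RemoveOscillation(RecoveryStep):
    def run(self, record):
        record["radiance"] = [r - 0.01 for r in record["radiance"]]   # placeholder correction
        return record

class RemoveContamination(RecoveryStep):
    def run(self, record):
        record["radiance"] = [r * 0.98 for r in record["radiance"]]   # placeholder correction
        return record

class CorrectObscuration(RecoveryStep):
    def run(self, record):
        record["radiance"] = [r / 0.5 for r in record["radiance"]]    # placeholder rescaling
        return record

def run_pipeline(steps, record):
    # Steps communicate only through the shared record, so one step can be
    # swapped out or reworked without affecting the others.
    for step in steps:
        record = step.run(record)
    return record

print(run_pipeline([RemoveOscillation(), RemoveContamination(), CorrectObscuration()],
                   {"radiance": [1.0, 1.1, 0.9]}))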
Kang, Junsu; Lee, Donghyeon; Heo, Young Jin; Chung, Wan Kyun
2017-11-07
For highly-integrated microfluidic systems, an actuation system is necessary to control the flow; however, the bulk of actuation devices including pumps or valves has impeded the broad application of integrated microfluidic systems. Here, we suggest a microfluidic process control method based on built-in microfluidic circuits. The circuit is composed of a fluidic timer circuit and a pneumatic logic circuit. The fluidic timer circuit is a serial connection of modularized timer units, which sequentially pass high pressure to the pneumatic logic circuit. The pneumatic logic circuit is a NOR gate array designed to control the liquid-controlling process. By using the timer circuit as a built-in signal generator, multi-step processes could be done totally inside the microchip without any external controller. The timer circuit uses only two valves per unit, and the number of process steps can be extended without limitation by adding timer units. As a demonstration, an automation chip has been designed for a six-step droplet treatment, which entails 1) loading, 2) separation, 3) reagent injection, 4) incubation, 5) clearing and 6) unloading. Each process was successfully performed for a pre-defined step-time without any external control device.
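A minimal software emulation of the sequencing idea follows (the real chip does this with fluidic timer units and a pneumatic NOR-gate array, not software); the per-step durations are hypothetical, since on the chip they are fixed by the fluidic resistance and capacitance of each timer unit.

import time

# The six droplet-treatment steps named in the abstract.
STEPS = ["loading", "separation", "reagent injection", "incubation", "clearing", "unloading"]

# Hypothetical per-step times in seconds.
DURATIONS = [1, 1, 2, 3, 1, 1]

def run_timer_chain(steps, durations):
    """Emulate timer units passing the high-pressure signal down the chain."""
    for step, dt in zip(steps, durations):
        print(f"timer unit high -> logic circuit enables: {step}")
        time.sleep(dt)   # stand-in for the pre-defined step time

run_timer_chain(STEPS, DURATIONS)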
Two-step rapid sulfur capture. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-04-01
The primary goal of this program was to test the technical and economic feasibility of a novel dry sorbent injection process called the Two-Step Rapid Sulfur Capture process for several advanced coal utilization systems. The Two-Step Rapid Sulfur Capture process consists of limestone activation in a high temperature auxiliary burner for short times followed by sorbent quenching in a lower temperature sulfur containing coal combustion gas. The Two-Step Rapid Sulfur Capture process is based on the Non-Equilibrium Sulfur Capture process developed by the Energy Technology Office of Textron Defense Systems (ETO/TDS). Based on the Non-Equilibrium Sulfur Capture studies, the range of conditions for optimum sorbent activation were thought to be: activation temperature > 2,200 K for activation times in the range of 10-30 ms. Therefore, the aim of the Two-Step process is to create a very active sorbent (under conditions similar to the bomb reactor) and complete the sulfur reaction under thermodynamically favorable conditions. A flow facility was designed and assembled to simulate the temperature, time, stoichiometry, and sulfur gas concentration prevalent in advanced coal utilization systems such as gasifiers, fluidized bed combustors, mixed-metal oxide desulfurization systems, diesel engines, and gas turbines.
Parallel workflow tools to facilitate human brain MRI post-processing
Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang
2015-01-01
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
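As a schematic example of such a workflow (not any specific tool covered in the review), the sketch below runs a per-subject processing chain for several subjects in parallel worker processes; the step functions and subject IDs are placeholders.

from concurrent.futures import ProcessPoolExecutor

def preprocess(subject):
    return f"{subject}: motion-corrected"      # placeholder for a real MRI step

def segment(subject):
    return f"{subject}: segmented"             # placeholder for a real MRI step

def run_subject(subject):
    # Steps for one subject run in sequence; different subjects run in parallel.
    return [preprocess(subject), segment(subject)]

if __name__ == "__main__":
    subjects = ["sub-01", "sub-02", "sub-03"]
    with ProcessPoolExecutor(max_workers=3) as pool:
        for result in pool.map(run_subject, subjects):
            print(result)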
Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform
NASA Technical Reports Server (NTRS)
Amini, Abolfazl M.; Figueroa, Fernando
2004-01-01
The information gathered from sensors is used to determine the health of a sensor. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of the sensor(s) or of the system (or process). The step-up and step-down features, as well as the sensor disturbances, are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system runs for a period of at least three time constants of the main process every time a process feature occurs (e.g. a step change). The Short-Time Fourier Transform of the signal is taken using the Hamming window. Three window widths are used. The DC value is removed from the windowed data prior to taking the FFT. The resulting three-dimensional spectral plots provide good time-frequency resolution. The results indicate distinct shapes corresponding to each process.
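A minimal sketch of the described analysis, assuming an RC-like step-up/step-down signal and illustrative sampling rate, time constant, and window widths (none of these values are given in the abstract); detrend="constant" removes the DC value from each windowed segment before the FFT.

import numpy as np
from scipy.signal import stft

fs = 100.0     # assumed sampling rate (Hz)
tau = 3.0      # assumed time constant of the modeled RC process
t = np.arange(0, 30, 1 / fs)

# Exponential step-up (charging) followed by a step-down (discharging) after t = 20 s.
signal = np.where(t < 20,
                  1 - np.exp(-t / tau),
                  (1 - np.exp(-20 / tau)) * np.exp(-(t - 20) / tau))

# Three window widths; the magnitude |Z| over (frequency, time) gives the spectral plot.
for nperseg in (64, 128, 256):
    f, seg_t, Z = stft(signal, fs=fs, window="hamming", nperseg=nperseg, detrend="constant")
    print(nperseg, Z.shape)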
Professional Development Priorities Process (Needs Assessment). Leader's Handbook.
ERIC Educational Resources Information Center
Southeast Idaho Teacher Center Consortium, Twin Falls.
Step-by-step instructions are provided for implementing the Professional Development Priorities Process (PDPP), an educational needs assessment process, by a school faculty member with groups of eight peers or more. The essence of PDPP, which was designed at the Southeast Idaho Teacher Center Consortium, is a dynamic group process in which needs…
42 CFR 50.406 - What are the steps in the process?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false What are the steps in the process? 50.406 Section 50.406 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GRANTS POLICIES OF GENERAL APPLICABILITY Public Health Service Grant Appeals Procedure § 50.406 What are the steps in...
Study on the mechanism of Si-glass-Si two step anodic bonding process
NASA Astrophysics Data System (ADS)
Hu, Lifang; Wang, Hao; Xue, Yongzhi; Shi, Fangrong; Chen, Shaoping
2018-04-01
Si-glass-Si was successfully bonded together through a two-step anodic bonding process. The bonding current in each step of the two-step bonding process was investigated and found to be quite different. The bonding current in the first step decreased quickly to a relatively small value, but in the second bonding step there were two current peaks; the current first decreased, then increased, and then decreased again. The second current peak occurred earlier at higher temperature and voltage. The two-step anodic bonding process was investigated in terms of bonding current. SEM and EDS tests were conducted to investigate the interfacial structure of the Si-glass-Si samples. The two bonding interfaces were almost the same, but after an etching process, transitional layers could be found at the bonding interfaces and a deeper trench with a thickness of ~1.5 µm could be found at the second bonding interface. Atomic force microscopy mapping results indicated that sodium precipitated from the back of the glass, which increases the surface roughness. Tensile tests indicated that fracture occurred at the glass substrate and that the bonding strength increased with increasing bonding temperature and voltage, with a maximum strength of 6.4 MPa.
Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing
Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun
2016-01-01
Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to the step formation in manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface by using both experiments and simulations. The step evolutions at different stages in the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. It has been found that the structure on the polished surface was in accordance with some surface compositions (the model of single-atom steps: Al steps or O steps). PMID:27444267
Simulation of dynamic processes when machining transition surfaces of stepped shafts
NASA Astrophysics Data System (ADS)
Maksarov, V. V.; Krasnyy, V. A.; Viushin, R. V.
2018-03-01
The paper addresses the characteristics of stepped surfaces of parts categorized as "solids of revolution". It is noted that during transition modes, when switching to end-surface machining, the load intensity across the section of the cut layer varies, which leads to changes in cutting force, the onset of vibrations, increased surface-layer roughness, reduced dimensional precision, and increased wear of the tool's cutting edge. This work proposes a method that consists in developing CNC program output code that allows complex forms of stepped shafts to be machined with only one machine setup. The authors developed and justified a mathematical model of a technological system for mechanical processing that accounts for the resolution of tool movement during transition processes, in order to assess the dynamic stability of the system when manufacturing stepped surfaces of parts of the "solid of revolution" type.
Drupsteen, Linda; Groeneweg, Jop; Zwetsloot, Gerard I J M
2013-01-01
Many incidents have occurred because organisations have failed to learn from lessons of the past. This means that there is room for improvement in the way organisations analyse incidents, generate measures to remedy identified weaknesses and prevent reoccurrence: the learning from incidents process. To improve that process, it is necessary to gain insight into the steps of this process and to identify factors that hinder learning (bottlenecks). This paper presents a model that enables organisations to analyse the steps in a learning from incidents process and to identify the bottlenecks. The study describes how this model is used in a survey and in 3 exploratory case studies in The Netherlands. The results show that there is limited use of learning potential, especially in the evaluation stage. To improve learning, an approach that considers all steps is necessary.
STEP wastewater treatment: a solar thermal electrochemical process for pollutant oxidation.
Wang, Baohui; Wu, Hongjun; Zhang, Guoxue; Licht, Stuart
2012-10-01
A solar thermal electrochemical production (STEP) pathway was established to utilize solar energy to drive useful chemical processes. In this paper, we use experimental chemistry for efficient STEP wastewater treatment, and suggest a theory based on the decreasing stability of organic pollutants (hydrocarbon oxidation potentials) with increasing temperature. Exemplified by the solar thermal electrochemical oxidation of phenol, the fundamental model and experimental system components of this process outline a general method for the oxidation of environmentally stable organic pollutants into carbon dioxide, which is easily removed. Using thermodynamic calculations we show a sharply decreasing phenol oxidation potential with increasing temperature. The experimental results demonstrate that this increased temperature can be supplied by solar thermal heating. In combination this drives electrochemical phenol removal with enhanced oxidation efficiency through (i) a thermodynamically driven decrease in the energy needed to fuel the process and (ii) improved kinetics to sustain high rates of phenol oxidation at low electrochemical overpotential. The STEP wastewater treatment process is synergistic in that it is performed with higher efficiency than either electrochemical or photovoltaic conversion process acting alone. STEP is a green, efficient, safe, and sustainable process for organic wastewater treatment driven solely by solar energy. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
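The thermodynamic argument can be sketched as follows; the ΔH, ΔS, and electron-count values below are placeholders rather than the paper's data for phenol, and the sketch only illustrates how a positive reaction entropy makes the required potential E = ΔG/(nF) fall as temperature rises.

# Hypothetical thermochemistry for an organic-pollutant oxidation.
F = 96485.0    # Faraday constant, C/mol
n = 28         # electrons transferred (illustrative value for full oxidation of phenol to CO2)
dH = 3.0e6     # J/mol, hypothetical enthalpy change
dS = 2.0e3     # J/(mol*K), hypothetical (positive) entropy change

for T in (298, 400, 500, 600):
    dG = dH - T * dS               # Gibbs free energy of the oxidation at temperature T
    E = dG / (n * F)               # required potential falls as T increases
    print(f"T = {T} K  ->  E = {E:.2f} V")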
Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong
2017-07-01
Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power towards the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, the nitrite-oxidizing bacteria, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of step-to-step microbial evolvement in multi-step drinking water treatment systems, and the results provide insight to control and manipulation of the drinking water microbiome via optimization of DWTP design and operation.
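As a generic illustration of the ordination step (not the paper's pipeline or data), the sketch below computes Bray-Curtis dissimilarities on a hypothetical OTU relative-abundance table and projects the samples into two dimensions with multidimensional scaling; the sample labels and table are invented.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical OTU relative-abundance table: rows = samples, columns = OTUs.
rng = np.random.default_rng(0)
otu = rng.dirichlet(np.ones(50), size=8)          # 8 samples, 50 OTUs
labels = ["water"] * 4 + ["biofilm"] * 4

dist = squareform(pdist(otu, metric="braycurtis"))
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

for lab, (x, y) in zip(labels, coords):
    print(f"{lab:8s} {x:+.3f} {y:+.3f}")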
1992-06-01
... civilian providers. Morrill (1987) performed the first step of the marketing effort, which was marketing research. First, Morrill began searching for a ... strategy I adopt from them is a marketing research strategy. According to Kotler and Clarke, "The marketing research process consists of five steps ... and report presentation" (p. 184). My paper will follow this marketing research process. The first step of this process is covered in the statement of ...
NASA Astrophysics Data System (ADS)
Nguyen, Duy
2012-07-01
Digital Elevation Models (DEMs) are used in many earth-science applications, such as topographic mapping, environmental modeling, rainfall-runoff studies, landslide hazard zonation, and seismic source modeling. In recent years, a multitude of scientific applications of Synthetic Aperture Radar Interferometry (InSAR) techniques have evolved. It has been shown that InSAR is an established technique for generating high quality DEMs from spaceborne and airborne data, and that it has advantages over other methods for the generation of large-area DEMs. However, the processing of InSAR data is still a challenging task. This paper describes the InSAR operational steps and processing chain for DEM generation from Single Look Complex (SLC) SAR data and compares a satellite SAR estimate of surface elevation with a DEM derived from a topographic map. The operational steps are performed in three major stages: Data Search, Data Processing, and Product Validation. The Data Processing stage is further divided into five steps: Data Pre-Processing, Co-registration, Interferogram generation, Phase unwrapping, and Geocoding. The processing steps have been tested with ERS 1/2 data using the Delft Object-oriented Interferometric (DORIS) InSAR processing software. Results from applying the described processing steps to a real data set are presented.
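The core interferogram-generation step can be sketched on synthetic data as below; real processing of co-registered ERS-1/2 SLC scenes (for example with DORIS) involves considerably more, and the fringe ramp used here is purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
shape = (512, 512)
master = np.exp(1j * rng.uniform(0, 2 * np.pi, shape))          # toy SLC scene
slave = master * np.exp(1j * 0.01 * np.arange(shape[1]))        # add a synthetic fringe ramp

# Interferogram: complex cross-product of the master and the conjugated slave.
interferogram = master * np.conj(slave)
wrapped_phase = np.angle(interferogram)                         # wrapped to (-pi, pi]

# Phase unwrapping and geocoding would follow; for example,
# skimage.restoration.unwrap_phase offers a 2-D unwrapper for wrapped_phase.
print(wrapped_phase.min(), wrapped_phase.max())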
Watanabe, Tatsunori; Tsutou, Kotaro; Saito, Kotaro; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei
2016-11-01
Choice reaction requires response conflict resolution, and the resolution processes that occur during a choice stepping reaction task undertaken in a standing position, which requires maintenance of balance, may be different to those processes occurring during a choice reaction task performed in a seated position. The study purpose was to investigate the resolution processes during a choice stepping reaction task at the cortical level using electroencephalography and compare the results with a control task involving ankle dorsiflexion responses. Twelve young adults either stepped forward or dorsiflexed the ankle in response to a visual imperative stimulus presented on a computer screen. We used the Simon task and examined the error-related negativity (ERN) that follows an incorrect response and the correct-response negativity (CRN) that follows a correct response. Error was defined as an incorrect initial weight transfer for the stepping task and as an incorrect initial tibialis anterior activation for the control task. Results revealed that ERN and CRN amplitudes were similar in size for the stepping task, whereas the amplitude of ERN was larger than that of CRN for the control task. The ERN amplitude was also larger in the stepping task than the control task. These observations suggest that a choice stepping reaction task involves a strategy emphasizing post-response conflict and general performance monitoring of actual and required responses and also requires greater cognitive load than a choice dorsiflexion reaction. The response conflict resolution processes appear to be different for stepping tasks and reaction tasks performed in a seated position.
NASA Astrophysics Data System (ADS)
Matsui, Miyako; Kuwahara, Kenichi
2018-06-01
A cyclic process for highly selective SiO2 etching with atomic-scale precision over Si3N4 was developed by using BCl3 and fluorocarbon gas chemistries. This process consists of two alternately performed steps: a deposition step using BCl3 mixed-gas plasma and an etching step using CF4/Ar mixed-gas plasma. The mechanism of the cyclic process was investigated by analyzing the surface chemistry at each step. BClx layers formed on both SiO2 and Si3N4 surfaces in the deposition step. Early in the etching step, the deposited BClx layers reacted with CFx radicals by forming CClx and BFx. Then, fluorocarbon films were deposited on both surfaces in the etching step. We found that the BClx layers formed in the deposition step enhanced the formation of the fluorocarbon films in the CF4 plasma etching step. In addition, because F radicals that radiated from the CF4 plasma reacted with B atoms while passing through the BClx layers, the BClx layers protected the Si3N4 surface from F-radical etching. The deposited layers, which contained the BClx, CClx, and CFx components, became thinner on SiO2 than on Si3N4, which promoted the ion-assisted etching of SiO2. This is because the BClx component had a high reactivity with SiO2, and the CFx component was consumed by the etching reaction with SiO2.
Do lightning positive leaders really "step"?
NASA Astrophysics Data System (ADS)
Petersen, D.
2015-12-01
It has been known for some time that positive leaders exhibit impulsive charge motion and optical emissions as they extend. However, laboratory and field observations have not produced any evidence of a process analogous to the space leader mechanism of negative leader extension. Instead, observations have suggested that the positive leader tip undergoes a continuous to intermittent series of corona streamer bursts, each burst resulting in a small forward extension of the positive leader channel. Traditionally, it has been held that lightning positive leaders extend in a continuous or quasi-continuous fashion. Lately, however, many have become concerned that this position is incongruous with observations of impulsive activity during lightning positive leader extension. It is increasingly suggested that this impulsive activity is evidence that positive leaders also undergo "stepping". There are two issues that must be addressed. The first issue concerns whether or not the physical processes underlying impulsive extension in negative and positive leaders are distinct. We argue that these processes are in fact physically distinct, and offer new high-speed video evidence to support this position. The second issue regards the proper use of the term "step" as an identifier for the impulsive forward extension of a leader. Traditional use of this term has been applied only to negative leaders, due primarily to their stronger impulsive charge motions and photographic evidence of clearly discontinuous forward progression of the luminous channel. Recently, due to the increasing understanding of the distinct "space leader" process of negative leader extension, the term "step" has increasingly come to be associated with the space leader process itself. Should this emerging association, "step" = space leader attachment, be canonized? If not, then it seems reasonable to use the term "step" to describe impulsive positive leader extension. If, however, we do wish to associate the term "step" with space leader attachment, a process unique to negative leaders, should we devise a term for those process(es) that underly impulsive positive leader extension?
ERIC Educational Resources Information Center
Organization and Human Resources Development Associates, Inc., Austin, TX.
This document outlines the steps in the process of converting military training materials in physician and dental assistant education to competency-based learning modules for use in the civilian sector. Subsections discuss the activity and any problems or issues involved for 14 steps. The 14 steps are as follow: establish liaison to obtain…
ERIC Educational Resources Information Center
Reese, Simon R.
2015-01-01
This paper reflects upon a three-step process to expand the problem definition in the early stages of an action learning project. The process created a community-powered problem-solving approach within the action learning context. The simple three steps expanded upon in the paper create independence, dependence, and inter-dependence to aid the…
Dynamic Emotional Processing in Experiential Therapy: Two Steps Forward, One Step Back
ERIC Educational Resources Information Center
Pascual-Leone, Antonio
2009-01-01
The study of dynamic and nonlinear change has been a valuable development in psychotherapy process research. However, little advancement has been made in describing how moment-by-moment affective processes contribute to larger units of change. The purpose of this study was to examine observable moment-by-moment sequences in emotional processing as…
One step process for producing dense aluminum nitride and composites thereof
Holt, J.B.; Kingman, D.D.; Bianchini, G.M.
1989-10-31
A one step combustion process for the synthesis of dense aluminum nitride compositions is disclosed. The process comprises igniting pure aluminum powder in a nitrogen atmosphere at a pressure of about 1,000 atmospheres or higher. The process enables the production of aluminum nitride bodies to be formed directly in a mold of any desired shape.
NASA Astrophysics Data System (ADS)
Hosseini, S. M. A.; Baran, I.; Akkerman, R.
2018-05-01
Laser-assisted tape winding (LATW) is an automated process for manufacturing fiber-reinforced thermoplastic tubular products, such as pipes and pressure vessels. Multi-physical phenomena such as heat transfer, mechanical bonding, phase changes and solid mechanics take place during the process. These phenomena need to be understood and described well for improved product reliability. Temperature is one of the important parameters to control and optimize product quality in this process, and it can be employed in an intelligent model-based inline control system. Depending on the lay-up configuration, the incoming tape can overlap with an already wound layer during the process. In this situation, the incoming tape steps on or steps off an already deposited layer/laminate. During the overlap, the part temperature changes due to the variation in geometry caused by the previously deposited layer, i.e. a bump geometry. In order to characterize the temperature behavior at the bump regions, an experimental setup was designed on a flat laminate. Artificial bumps/steps were formed on the laminate with various thicknesses and fiber orientations. As the laser head experiences the step-on and step-off, an IR (infrared) camera and embedded thermocouples measure the temperature on the surface and inside the laminate, respectively. During step-on, a small drop in temperature is observed, while during step-off a higher peak in temperature is observed. It can be concluded that the change in temperature during overlapping is due to the change in laser incident angle caused by the bump geometry. The effect of the step thickness on the temperature peak is quantified and found to be significant.
NASA Astrophysics Data System (ADS)
Omiya, Takuma; Tanaka, Akira; Shimomura, Masaru
2012-07-01
The structure of porous silicon carbide membranes that peeled off spontaneously during electrochemical etching was studied. They were fabricated from n-type 6H SiC(0001) wafers by a double-step electrochemical etching process in a hydrofluoric electrolyte. Nanoporous membranes were obtained after double-step etching with current densities of 10-20 and 60-100 mA/cm2 in the first and second steps, respectively. Microporous membranes were also fabricated after double-step etching with current densities of 100 and 200 mA/cm2. It was found that the pore diameter is influenced by the etching current in step 1, and that a higher current is required in step 2 when the current in step 1 is increased. During the etching processes in steps 1 and 2, vertical nanopore and lateral crack formations proceed, respectively. The influx pathway of hydrofluoric solution, expansion of generated gases, and transfer limitation of positive holes to the pore surface are the key factors in the peeling-off mechanism of the membrane.
Working through. A process of restitution.
Gottesman, D M
A number of authors, including Freud, have written about the process of working through but have left unsettled what is actually involved. I have attempted to outline the step-by-step process of working through, starting with recollection and repetition and ending with restitution and resolution. I have introduced the term restitution in order to give more importance to an already existing step in the working-through process; it should not be looked upon as an artificial device. Restitution allows the patient to find appropriate gratification in present reality, and this helps him to relinquish the past. Rather than allowing the patient to "wallow in the muck of guilt," as Eveoleen Rexford suggests society "wallows" in its inability to help its children, restitution gives appropriate direction for change. It is a natural step in the successful resolution of treatment.
Kelley, Brian D; Tannatt, Molly; Magnusson, Robert; Hagelberg, Sigrid; Booth, James
2004-08-05
An affinity chromatography step was developed for purification of recombinant B-Domain Deleted Factor VIII (BDDrFVIII) using a peptide ligand selected from a phage display library. The peptide library had variegated residues, contained both within a disulfide bond-constrained ring and flanking the ring. The peptide ligand binds to BDDrFVIII with a dissociation constant of approximately 1 microM both in free solution and when immobilized on a chromatographic resin. The peptide is chemically synthesized and the affinity resin is produced by coupling the peptide to an agarose matrix preactivated with N-hydroxysuccinimide. Coupling conditions were optimized to give consistent and complete ligand incorporation and validated with a robustness study that tested various combinations of processing limits. The peptide affinity chromatographic operation employs conditions very similar to an immunoaffinity chromatography step currently in use for BDDrFVIII manufacture. The process step provides excellent recovery of BDDrFVIII from a complex feed stream and reduces host cell protein and DNA by 3-4 logs. Process validation studies established resin reuse over 26 cycles without changes in product recovery or purity. A robustness study using a factorial design was performed and showed that the step was insensitive to small changes in process conditions that represent normal variation in commercial manufacturing. A scaled-down model of the process step was qualified and used for virus removal studies. A validation package addressing the safety of the leached peptide included leaching rate measurements under process conditions, testing of peptide levels in product pools, demonstration of robust removal downstream by spiking studies, end product testing, and toxicological profiling of the ligand. The peptide ligand affinity step was scaled up for cGMP production of BDDrFVIII for clinical trials.
20 CFR 404.1520 - Evaluation of disability in general.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-step sequential evaluation process we use to decide whether you are disabled, as defined in § 404.1505...-step sequential evaluation process. The sequential evaluation process is a series of five “steps” that... severe medically determinable physical or mental impairment that meets the duration requirement in § 404...
Navigating the Grad School Application Process: A Training Schedule
ERIC Educational Resources Information Center
Swindlehurst, Garrett R.; Bullard, Lisa G.
2014-01-01
Through a simple step-by-step guide for navigating the graduate school application process, a graduate student who's been through the wringer and a faculty advisor who knows the ropes offer advice to walk prospective grad students through the process of successfully entering graduate school. A repeat printing.
NASA Technical Reports Server (NTRS)
Himmel, R. P.
1975-01-01
Hybrid processes, handling procedures, and materials were examined to identify the critical process steps in which contamination is most likely to occur, to identify the particular contaminants associated with these critical steps, and to propose methods for the control of these contaminants.
Developing a Methodology for Designing Systems of Instruction.
ERIC Educational Resources Information Center
Carpenter, Polly
This report presents a description of a process for instructional system design, identification of the steps in the design process, and determination of their sequence and interrelationships. As currently envisioned, several interrelated steps must be taken, five of which provide the inputs to the final design process. There are analysis of…
Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.
2012-01-01
Background and Aims The involvement of two steps in the physical dormancy (PY)-breaking process previously has been demonstrated in seeds of Fabaceae and Convolvulaceae. Even though there is a claim for a moisture-controlled stepwise PY-breaking in some species of Geraniaceae, no study has evaluated the role of temperature in the PY-breaking process in this family. The aim of this study was to determine whether a temperature-controlled stepwise PY-breaking process occurs in seeds of the winter annuals Geranium carolinianum and G. dissectum. Methods Seeds of G. carolinianum and G. dissectum were stored under different temperature regimes to test the effect of storage temperature on PY-break. The role of temperature and moisture regimes in regulating PY-break was investigated by treatments simulating natural conditions. Greenhouse (non-heated) experiments on seed germination and burial experiments (outdoors) were carried out to determine the PY-breaking behaviour in the natural habitat. Key Results Irrespective of moisture conditions, sensitivity to the PY-breaking step in seeds of G. carolinianum was induced at temperatures ≥20 °C, and exposure to temperatures ≤20 °C made the sensitive seeds permeable. Sensitivity of seeds increased with time. In G. dissectum, PY-break occurred at temperatures ≥20 °C in a single step under constant wet or dry conditions and in two steps under alternate wet–dry conditions if seeds were initially kept wet. Conclusions Timing of seed germination with the onset of autumn can be explained by PY-breaking processes involving (a) two temperature-dependent steps in G. carolinianum and (b) one or two moisture-dependent step(s) along with the inability to germinate under high temperatures in G. dissectum. Geraniaceae is the third of 18 families with PY in which a two-step PY-breaking process has been demonstrated. PMID:22684684
NASA Astrophysics Data System (ADS)
Santi, S. S.; Renanto; Altway, A.
2018-01-01
The energy use system in a production process, in this case heat exchanger networks (HENs), is one element that plays a role in the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration becomes an important requirement. In a plant, heat integration can be carried out internally or in combination between process units. However, the steps in determining a suitable heat integration technique require long calculations and a long time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating 6 hypothetical units using a Pinch Analysis approach with the energy target and total annual cost target as objective functions. The six hypothetical units consist of units A, B, C, D, E, and F, where each unit has its process streams located differently with respect to the pinch temperature. The result is a potential heat integration (ΔH') formula that can trim the conventional procedure from 7 steps to just 3. The preferred heat integration technique is then determined by calculating the potential heat integration (ΔH') between the hypothetical process units.
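The paper's ΔH' formula is not reproduced here, but the energy-target side of a Pinch Analysis can be sketched with the standard problem-table cascade; the stream data and ΔTmin below are hypothetical and serve only to show how the minimum hot and cold utility targets are obtained.

dTmin = 10.0
# (kind, supply T in C, target T in C, heat-capacity flowrate CP in kW/K): hypothetical streams.
streams = [("hot", 180.0, 60.0, 3.0),
           ("hot", 150.0, 30.0, 1.5),
           ("cold", 20.0, 135.0, 2.5),
           ("cold", 80.0, 140.0, 4.0)]

def shifted(kind, T):
    # Shift hot streams down and cold streams up by dTmin/2.
    return T - dTmin / 2 if kind == "hot" else T + dTmin / 2

# Shifted temperature boundaries, highest first.
bounds = sorted({shifted(k, T) for k, Ts, Tt, _ in streams for T in (Ts, Tt)}, reverse=True)

surplus = []
for hi, lo in zip(bounds, bounds[1:]):
    net_cp = 0.0
    for kind, Ts, Tt, cp in streams:
        top = max(shifted(kind, Ts), shifted(kind, Tt))
        bot = min(shifted(kind, Ts), shifted(kind, Tt))
        if bot <= lo and top >= hi:                    # stream spans this interval
            net_cp += cp if kind == "hot" else -cp
    surplus.append(net_cp * (hi - lo))

# Cascade the interval surpluses; the largest deficit sets the hot-utility target.
cascade, running = [], 0.0
for s in surplus:
    running += s
    cascade.append(running)
q_hot = max(0.0, -min(cascade))
q_cold = q_hot + sum(surplus)
print(f"minimum hot utility  = {q_hot:.1f} kW")
print(f"minimum cold utility = {q_cold:.1f} kW")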
Thermal quenching effect of an infrared deep level in Mg-doped p-type GaN films
NASA Astrophysics Data System (ADS)
Kim, Keunjoo; Chung, Sang Jo
2002-03-01
The thermal quenching of an infrared deep level of 1.2-1.5 eV has been investigated on Mg-doped p-type GaN films, using one- and two-step annealing processes and photocurrent measurements. The deep level appeared in the one-step annealing process at a relatively high temperature of 900 °C, but disappeared in the two-step annealing process with a low-temperature step and a subsequent high-temperature step. The persistent photocurrent was residual in the sample including the deep level, while it was terminated in the sample without the deep level. This indicates that the deep level is a neutral hole center located above a quasi-Fermi level, estimated with an energy of E_pF = 0.1-0.15 eV above the valence band at a hole carrier concentration of 2.0-2.5×10^17/cm^3.
Process for the combined removal of SO.sub.2 and NO.sub.x from flue gas
Chang, Shih-Ger; Liu, David K.; Griffiths, Elizabeth A.; Littlejohn, David
1988-01-01
The present invention in one aspect relates to a process for the simultaneous removal of NO.sub.x and SO.sub.2 from a fluid stream comprising mixtures thereof and in another aspect relates to the separation, use and/or regeneration of various chemicals contaminated or spent in the process and which includes the steps of: (A) contacting the fluid stream at a temperature of between about 105.degree. and 180.degree. C. with a liquid aqueous slurry or solution comprising an effective amount of an iron chelate of an amino acid moiety having at least one --SH group; (B) separating the fluid stream from the particulates formed in step (A) comprising the chelate of the amino acid moiety and fly ash; (C) washing and separating the particulates of step (B) with an aqueous solution having a pH value of between about 5 to 8; (D) subsequently washing and separating the particulates of step (C) with a strongly acidic aqueous solution having a pH value of between about 1 to 3; (E) washing and separating the particulates of step (D) with a basic aqueous solution having a pH value of between about 9 to 12; (F) optionally adding additional amino acid moiety, iron (II) and alkali to the aqueous liquid from step (D) to produce an aqueous solution or slurry similar to that in step (A) having a pH value of between about 4 to 12; and (G) recycling the aqueous slurry of step (F) to the contacting zone of step (A). Steps (D) and (E) can be carried out in the reverse sequence; however, the preferred order is (D) and then (E). In another preferred embodiment the present invention provides a process for the removal of NO.sub.x, SO.sub.2 and particulates from a fluid stream which includes the steps of (A) injecting into a reaction zone an aqueous solution itself comprising (i) an amino acid moiety selected from those described above; (ii) iron (II) ion; and (iii) an alkali, wherein the aqueous solution has a pH of between about 4 and 11; followed by solids separation and washing as is described in steps (B), (C), (D) and (E) above. The overall process is useful to reduce acid rain components from combustion gas sources.
Fassbender, Alex G.
1995-01-01
The invention greatly reduces the amount of ammonia in sewage plant effluent. The process of the invention has three main steps. The first step is dewatering without first digesting, thereby producing a first ammonia-containing stream having a low concentration of ammonia, and a second solids-containing stream. The second step is sending the second solids-containing stream through a means for separating the solids from the liquid and producing an aqueous stream containing a high concentration of ammonia. The third step is removal of ammonia from the aqueous stream using a hydrothermal process.
Scherer, Michael D; Kattadiyil, Mathew T; Parciak, Ewa; Puri, Shweta
2014-01-01
Three-dimensional radiographic imaging for dental implant treatment planning is gaining widespread interest and popularity. However, application of the data from 3-D imaging can be a complex and daunting process initially. The purpose of this article is to describe features of three software packages and the respective computerized guided surgical templates (GST) fabricated from them. A step-by-step method of interpreting and ordering a GST to simplify the process of surgical planning and implant placement is discussed.
ERIC Educational Resources Information Center
Deal, Gerald A.; Montgomery, James A.
This guide describes standard operating job procedures for the screening and grinding process of wastewater treatment facilities. The objective of this process is the removal of coarse materials from the raw waste stream for the protection of subsequent equipment and processes. The guide gives step-by-step instructions for safety inspection,…
ERIC Educational Resources Information Center
Schwing, Carl M.
This guide describes standard operating job procedures for the screening and grinding process of wastewater treatment facilities. The objective of this process is the removal of coarse materials from the raw waste stream for the protection of subsequent equipment and processes. The guide gives step-by-step instructions for safety inspection,…
Non-cellulosic polysaccharides from cotton fibre are differently impacted by textile processing.
Runavot, Jean-Luc; Guo, Xiaoyuan; Willats, William G T; Knox, J Paul; Goubet, Florence; Meulewaeter, Frank
2014-01-01
Cotton fibre is mainly composed of cellulose, although non-cellulosic polysaccharides play key roles during fibre development and are still present in the harvested fibre. This study aimed at determining the fate of non-cellulosic polysaccharides during cotton textile processing. We analyzed non-cellulosic cotton fibre polysaccharides during different steps of cotton textile processing using GC-MS, HPLC and comprehensive microarray polymer profiling to obtain monosaccharide and polysaccharide amounts and linkage compositions. Additionally, in situ detection was used to obtain information on polysaccharide localization and accessibility. We show that pectic and hemicellulosic polysaccharide levels decrease during cotton textile processing and that some processing steps have more impact than others. Pectins and arabinose-containing polysaccharides are strongly impacted by the chemical treatments, with most being removed during bleaching and scouring. However, some forms of pectin are more resistant than others. Xylan and xyloglucan are affected in later processing steps and to a lesser extent, whereas callose showed a strong resistance to the chemical processing steps. This study shows that non-cellulosic polysaccharides are differently impacted by the treatments used in cotton textile processing with some hemicelluloses and callose being resistant to these harsh treatments.
Practices to enable the geophysical research spectrum: from fundamentals to applications
NASA Astrophysics Data System (ADS)
Kang, S.; Cockett, R.; Heagy, L. J.; Oldenburg, D.
2016-12-01
In a geophysical survey, a source injects energy into the earth and a response is measured. These physical systems are governed by partial differential equations and their numerical solutions are obtained by discretizing the earth. Geophysical simulations and inversions are tools for understanding physical responses and constructing models of the subsurface given a finite amount of data. SimPEG (http://simpeg.xyz) is our effort to synthesize geophysical forward and inverse methodologies into a consistent framework. The primary focus of our initial development has been on the electromagnetics (EM) package, with recent extensions to magnetotelluric, direct current (DC), and induced polarization. Across these methods, and applied geophysics in general, we require tools to explore and build an understanding of the physics (behaviour of fields, fluxes), and work with data to produce models through reproducible inversions. If we consider DC or EM experiments, with the aim of understanding responses from subsurface conductors, we require resources that provide multiple "entry points" into the geophysical problem. To understand the physical responses and measured data, we must simulate the physical system and visualize electric fields, currents, and charges. Performing an inversion requires that many moving pieces be brought together: simulation, physics, linear algebra, data processing, optimization, etc. Each component must be trusted, accessible to interrogation and manipulation, and readily combined in order to enable investigation into inversion methodologies. To support such research, we not only require "entry points" into the software, but also extensibility to new situations. In our development of SimPEG, we have sought to use leading practices in software development with the aim of supporting and promoting collaborations across a spectrum of geophysical research: from fundamentals to applications. Designing software to enable this spectrum puts unique constraints on both the architecture of the codebase as well as the development practices that are employed. In this presentation, we will share some lessons learned and, in particular, how our prioritization of testing, documentation, and refactoring has impacted our own research and fostered collaborations.
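As a concrete, if greatly simplified, picture of the "many moving pieces" listed above, the sketch below wires a linear forward simulation, a data misfit, Tikhonov regularization and a gradient-descent update into one small inversion loop. It is a generic NumPy illustration of the workflow, not SimPEG's actual API; the operator G and all parameter values are invented for demonstration.

import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 20, 50
G = rng.normal(size=(n_data, n_model))                 # stand-in linear forward operator
m_true = np.zeros(n_model)
m_true[20:30] = 1.0                                    # a simple blocky "subsurface" model
d_obs = G @ m_true + 0.01 * rng.normal(size=n_data)    # simulated data with noise

beta = 1e-2                                            # misfit vs. regularization trade-off
m = np.zeros(n_model)
for _ in range(2000):                                  # plain gradient descent on the objective
    residual = G @ m - d_obs                           # data misfit term ||G m - d||^2
    grad = G.T @ residual + beta * m                   # plus beta * ||m||^2 (Tikhonov term)
    m -= 0.005 * grad
print(round(float(np.linalg.norm(G @ m - d_obs)), 4))  # final data misfit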
Seismic Canvas: Evolution as a Data Exploration and Analysis Tool
NASA Astrophysics Data System (ADS)
Kroeger, G. C.
2015-12-01
SeismicCanvas, originally developed as a prototype interactive waveform display and printing application for educational use, has evolved to include significant data exploration and analysis functionality. The most recent version supports data import from a variety of standard file formats including SAC and mini-SEED, as well as search and download capabilities via IRIS/FDSN Web Services. Data processing tools now include removal of means and trends, interactive windowing, filtering, smoothing, tapering, and resampling. Waveforms can be displayed in a free-form canvas or as a record section based on angular or great circle distance, azimuth or back azimuth. Integrated tau-p code allows the calculation and display of theoretical phase arrivals from a variety of radial Earth models. Waveforms can be aligned by absolute time, event time, picked or theoretical arrival times and can be stacked after alignment. Interactive measurements include means, amplitudes, time delays, ray parameters and apparent velocities. Interactive picking of an arbitrary list of seismic phases is supported. Bode plots of amplitude and phase spectra and spectrograms can be created from multiple seismograms or selected windows of seismograms. Direct printing is implemented on all supported platforms along with output of high-resolution pdf files. With these added capabilities, the application is now being used as a data exploration tool for research. Coded in C++ and using the cross-platform Qt framework, the most recent version is available as a 64-bit application for Windows 7-10, Mac OS X 10.6-10.11, and most distributions of Linux, and a 32-bit version for Windows XP and 7. With the latest improvements and refactoring of trace display classes, the 64-bit versions have been tested with over 250 million samples and remain responsive in interactive operations. The source code is available under an LGPLv3 license and both source and executables are available through the IRIS SeisCode repository.
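The routine pre-processing steps named above (removal of means and trends, tapering) have direct counterparts in NumPy/SciPy. The snippet below applies them to a synthetic trace; it is a generic illustration of those operations, not code from SeismicCanvas, and the sampling rate and taper length are arbitrary choices.

import numpy as np
from scipy.signal import detrend

fs = 100.0                                      # sampling rate, Hz (arbitrary)
t = np.arange(0.0, 60.0, 1.0 / fs)              # 60 s synthetic trace
trace = np.sin(2 * np.pi * 1.5 * t) + 0.02 * t + 0.5   # signal + linear trend + offset

trace = detrend(trace, type="linear")           # removes both the mean and the linear trend
n_taper = int(5 * fs)                           # 5 s cosine taper at each end
ramp = np.hanning(2 * n_taper)
window = np.ones_like(trace)
window[:n_taper] = ramp[:n_taper]               # rising half of the taper
window[-n_taper:] = ramp[n_taper:]              # falling half of the taper
trace *= window                                 # taper ends before filtering or an FFT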
Bumper 3 Update for IADC Protection Manual
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim
2016-01-01
The Bumper code has been the standard in use by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied BUMPER to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Units (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing the failure threshold of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it to other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintenance, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored for specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect all editions of the code that contain the particular response subroutines which are modified. Note that the Bumper II program is no longer maintained or distributed by NASA.
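The separation described above, a stable core plus a configuration-controlled library of ballistic limit equations (BLEs), is essentially a plugin architecture. The Python sketch below illustrates that structure only; the registry, the placeholder equation and its coefficients are invented for illustration and are not Bumper 3 code or a real ballistic limit equation.

# Structure-only illustration of a core / BLE-library split; not Bumper 3 code.
BLE_REGISTRY = {}

def register_ble(name):
    # Add a ballistic-limit-equation routine to the swappable library.
    def wrap(fn):
        BLE_REGISTRY[name] = fn
        return fn
    return wrap

@register_ble("placeholder_single_wall")
def placeholder_single_wall(diameter_cm, velocity_km_s):
    # Invented placeholder failure threshold; NOT a real ballistic limit equation.
    return diameter_cm > 0.1 * (10.0 / max(velocity_km_s, 0.1)) ** 0.5

def count_failures(impacts, ble_name):
    # Stable "core": loop over impact cases and apply whichever BLE this edition selects.
    ble = BLE_REGISTRY[ble_name]
    return sum(ble(d, v) for d, v in impacts)

impacts = [(0.05, 7.0), (0.30, 10.0), (0.12, 3.0)]   # (particle diameter cm, velocity km/s)
print(count_failures(impacts, "placeholder_single_wall"))   # -> 1 failure in this toy case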
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J.; Jacquelin, Mathias; De Jong, Wibe A.
2017-10-20
Ab-initio Molecular Dynamics (AIMD) methods are an important class of algorithms, as they enable scientists to understand the chemistry and dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. Many-core architectures such as the Intel® Xeon Phi™ processor are an interesting and promising target for these algorithms, as they can provide the computational power that is needed to solve interesting problems in chemistry. In this paper, we describe the efforts of refactoring the existing AIMD plane-wave method of NWChem from an MPI-only implementation to a scalable, hybrid code that employs MPI and OpenMP to exploit the capabilities of current and future many-core architectures. We describe the optimizations required to get close to optimal performance for the multiplication of the tall-and-skinny matrices that form the core of the computational algorithm. We present strong scaling results on the complete AIMD simulation for a test case that simulates 256 water molecules and that strong-scales well on a cluster of 1024 nodes of Intel Xeon Phi processors. We compare the performance obtained with a cluster of dual-socket Intel® Xeon® E5-2698v3 processors.
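The "tall-and-skinny" products mentioned above involve matrices with very many plane-wave rows and only a few orbital columns, so each distributed row block contributes a small square partial product that is then summed across ranks. The sketch below mimics that data layout in plain NumPy on a single process; the sizes are invented, and it illustrates the pattern only, not NWChem code or an actual MPI/OpenMP implementation.

import numpy as np

n_pw, n_orb, n_ranks = 100_000, 64, 8          # many plane-wave rows, few orbital columns
rng = np.random.default_rng(1)
A = rng.normal(size=(n_pw, n_orb))             # tall-and-skinny coefficient matrix

blocks = np.array_split(A, n_ranks, axis=0)    # row blocks stand in for per-rank storage
partials = [blk.T @ blk for blk in blocks]     # each "rank" forms a small n_orb x n_orb product
S = np.sum(partials, axis=0)                   # the all-reduce/summation step, done locally here

assert np.allclose(S, A.T @ A)                 # matches the undistributed product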
ESA's Planetary Science Archive: Preserve and present reliable scientific data sets
NASA Astrophysics Data System (ADS)
Besse, S.; Vallat, C.; Barthelemy, M.; Coia, D.; Costa, M.; De Marchi, G.; Fraga, D.; Grotheer, E.; Heather, D.; Lim, T.; Martinez, S.; Arviset, C.; Barbarisi, I.; Docasal, R.; Macfarlane, A.; Rios, C.; Saiz, J.; Vallejo, F.
2018-01-01
The European Space Agency (ESA) Planetary Science Archive (PSA) is undergoing a significant refactoring of all its components to improve the services provided to the scientific community and the public. The PSA supports ESA's missions exploring the Solar System by archiving scientific peer-reviewed observations as well as engineering data sets. This includes the Giotto, SMART-1, Huygens, Venus Express, Mars Express, Rosetta, ExoMars 2016, ExoMars RSP, BepiColombo, and JUICE missions. The PSA offers a newly designed graphical user interface meant both to maximise interaction with the scientific observations and to minimise the effort needed to download them. The PSA still offers the same services as before (i.e., FTP, documentation, helpdesk, etc.). In addition, it will support both formats of the Planetary Data System (i.e., PDS3 and PDS4), and will provide new ways of searching the data products by specific metadata and geometrical parameters. Beyond these enhanced services, the PSA will also provide new services to improve the visualisation of data products and scientific content (e.g., spectra). Together with improved access to the spacecraft engineering data sets, the PSA will provide easier access to scientific data products that will help to maximise the science return of ESA's space missions.
Offline detection of broken rotor bars in AC induction motors
NASA Astrophysics Data System (ADS)
Powers, Craig Stephen
The detection of the broken rotor bar defect in medium- and large-sized AC induction machines is currently one of the most difficult tasks for the motor condition and monitoring industry. If a broken rotor bar defect goes undetected, it can cause a catastrophic failure of an expensive machine. If a broken rotor bar defect is falsely determined, it wastes time and money to physically tear down and inspect the machine only to find an incorrect diagnosis. Previous work in 2009 at Baker/SKF-USA in collaboration with the Korea University has developed a prototype instrument that has been highly successful in correctly detecting the broken rotor bar defect in ACIMs where other methods have failed. Dr. Sang Bin and his students at the Korea University have been using this prototype instrument to help the industry save money in the successful detection of the BRB defect. A review of the current state of motor condition and monitoring technology for detecting the broken rotor bar defect in ACIMs shows improved detection of this fault is still relevant. An analysis of previous work in the creation of this prototype instrument leads into the refactoring of the software and hardware into something more deployable, cost effective and commercially viable.
Use of proteomics for validation of the isolation process of clotting factor IX from human plasma.
Clifton, James; Huang, Feilei; Gaso-Sokac, Dajana; Brilliant, Kate; Hixson, Douglas; Josic, Djuro
2010-01-03
The use of proteomic techniques in the monitoring of different production steps of plasma-derived clotting factor IX (pd F IX) was demonstrated. The first step, solid-phase extraction with a weak anion-exchange resin, fractionates the bulk of human serum albumin (HSA), immunoglobulin G, and other non-binding proteins from F IX. The proteins that strongly bind to the anion-exchange resin are eluted by higher salt concentrations. In the second step, anion-exchange chromatography, residual HSA, some proteases and other contaminating proteins are separated. In the last chromatographic step, affinity chromatography with immobilized heparin, the majority of the residual impurities are removed. However, some contaminating proteins still remain in the eluate from the affinity column. The next step in the production process, virus filtration, is also an efficient step for the removal of residual impurities, mainly high molecular weight proteins, such as vitronectin and inter-alpha inhibitor proteins. In each production step, the active component, pd F IX and contaminating proteins are monitored by biochemical and immunochemical methods and by LC-MS/MS and their removal documented. Our methodology is very helpful for further process optimization, rapid identification of target proteins with relatively low abundance, and for the design of subsequent steps for their removal or purification.
Volume holographic elements in Kodak 131 plates processed with SHSG method
NASA Astrophysics Data System (ADS)
Collados, Manuel V.; Atencia, Jesus; Lopez, Ana M.; Quintanilla, Manuel M.
2001-08-01
A SHSG procedure to record volume phase holograms in Kodak 131 plates is presented. We analyze the influence on the diffraction efficiency of the developing step and of the temperature of the bleaching bath in usual SHSG processes. Applying a simple 12-step process to form phase transmission holograms, developing with D-19, bleaching with R-10 at 70 degrees C, and removing the sensitizing dyes that remain in the emulsion with a diluted methanol bath after the fixation step, we obtain relative efficiencies of 100 percent and effective efficiencies of 70 percent.
Secretory immunoglobulin purification from whey by chromatographic techniques.
Matlschweiger, Alexander; Engelmaier, Hannah; Himmler, Gottfried; Hahn, Rainer
2017-08-15
Secretory immunoglobulins (SIg) are a major fraction of the mucosal immune system and represent potential drug candidates. So far, platform technologies for their purification do not exist. SIg from animal whey was used as a model to develop a simple, efficient and potentially generic chromatographic purification process. Several chromatographic stationary phases were tested. A combination of two anion-exchange steps resulted in the highest purity. The key step was the use of a small-porous anion exchanger operated in flow-through mode. Diffusion of SIg into the resin particles was significantly hindered, while the main impurities, IgG and serum albumin, were bound. In this step, initial purity was increased from 66% to 89% with a step yield of 88%. In a second anion-exchange step using giga-porous material, SIg was captured and purified by step or linear gradient elution to obtain fractions with purities >95%. For the step-gradient elution, the step yield of highly pure SIg was 54%. Elution of SIgA and SIgM with a linear gradient resulted in step yields of 56% and 35%, respectively. Overall yields for both anion-exchange steps were 43% for the combination of flow-through and step elution mode. Combination of flow-through and linear gradient elution mode resulted in a yield of 44% for SIgA and 39% for SIgM. The proposed process allows the purification of biologically active SIg from animal whey in preparative scale. For future applications, the process can easily be adopted for purification of recombinant secretory immunoglobulin species. Copyright © 2017 Elsevier B.V. All rights reserved.
Methods and systems for detection of radionuclides
Coates, Jr., John T.; DeVol, Timothy A.
2010-05-25
Disclosed are materials and systems useful in determining the existence of radionuclides in an aqueous sample. The materials provide the dual function of both extraction and scintillation to the systems. The systems can be both portable and simple to use, and as such can beneficially be utilized to determine presence and optionally concentration of radionuclide contamination in an aqueous sample at any desired location and according to a relatively simple process without the necessity of complicated sample handling techniques. The disclosed systems include a one-step process, providing simultaneous extraction and detection capability, and a two-step process, providing a first extraction step that can be carried out in a remote field location, followed by a second detection step that can be carried out in a different location.
iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM
Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.
2011-01-01
iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445
A Three-Step Atomic Layer Deposition Process for SiNx Using Si2Cl6, CH3NH2, and N2 Plasma.
Ovanesyan, Rafaiel A; Hausmann, Dennis M; Agarwal, Sumit
2018-06-06
We report a novel three-step SiNx atomic layer deposition (ALD) process using Si2Cl6, CH3NH2, and N2 plasma. In a two-step process, nonhydrogenated chlorosilanes such as Si2Cl6 with N2 plasmas lead to poor-quality SiNx films that oxidize rapidly. The intermediate CH3NH2 step was therefore introduced in the ALD cycle to replace the NH3 plasma step with a N2 plasma, while using Si2Cl6 as the Si precursor. This three-step process lowers the atomic H content and improves the film conformality on high-aspect-ratio nanostructures as Si-N-Si bonds are formed during a thermal CH3NH2 step in addition to the N2 plasma step. During ALD, the reactive surface sites were monitored using in situ surface infrared spectroscopy. Our infrared spectra show that, on the post-N2 plasma-treated SiNx surface, Si2Cl6 reacts primarily with the surface -NH2 species to form surface -SiClx (x = 1, 2, or 3) bonds, which are the reactive sites during the CH3NH2 cycle. In the N2 plasma step, reactive -NH2 surface species are created because of the surface H available from the -CH3 groups. At 400 °C, the SiNx films have a growth per cycle of ∼0.9 Å with ∼12 atomic percent H. The films grown on high-aspect-ratio nanostructures have a conformality of ∼90%.
Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L
2012-11-01
Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.
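The counts reported above (467 steps, 207 of them devoted to 59 exceptional situations) come from a formal process definition. The fragment below sketches in Python the kind of step-with-exception-handler structure such a definition encodes and how steps of that sort can be tallied; the representation and the step names are invented for illustration and are not the notation used in the study.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    name: str
    substeps: List["Step"] = field(default_factory=list)
    handlers: List["Step"] = field(default_factory=list)   # steps taken when exceptions arise

def count_steps(step):
    # Count the step itself plus all of its substeps and exception-handling steps.
    return 1 + sum(count_steps(s) for s in step.substeps + step.handlers)

# Invented toy fragment of a treatment-plan review process.
verify_dose = Step("verify dose", handlers=[Step("recalculate dose"), Step("contact prescriber")])
review = Step("review treatment plan", substeps=[Step("check labs"), verify_dose])
print(count_steps(review))    # -> 5 steps in this toy fragment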
Optimum processing of mammographic film.
Sprawls, P; Kitts, E L
1996-03-01
Underprocessing of mammographic film can result in reduced contrast and visibility of breast structures and an unnecessary increase in radiation dose to the patient. Underprocessing can be caused by physical factors (low developer temperature, inadequate development time, insufficient developer agitation) or chemical factors (developer not optimized for film type; overdiluted, underreplenished, contaminated, or frequently changed developer). Conventional quality control programs are designed to produce consistent processing but do not address the issue of optimum processing. Optimum processing is defined as the level of processing that produces the film performance characteristics (contrast and sensitivity) specified by the film manufacturer. Optimum processing of mammographic film can be achieved by following a two-step protocol. The first step is to set up the processing conditions according to recommendations from the film and developer chemistry manufacturers. The second step is to verify the processing results by comparing them with sensitometric data provided by the film manufacturer.
Solar kerosene from H2O and CO2
NASA Astrophysics Data System (ADS)
Furler, P.; Marxer, D.; Scheffe, J.; Reinalda, D.; Geerlings, H.; Falter, C.; Batteiger, V.; Sizmann, A.; Steinfeld, A.
2017-06-01
The entire production chain for renewable kerosene obtained directly from sunlight, H2O, and CO2 is experimentally demonstrated. The key component of the production process is a high-temperature solar reactor containing a reticulated porous ceramic (RPC) structure made of ceria, which enables the splitting of H2O and CO2 via a 2-step thermochemical redox cycle. In the 1st reduction step, ceria is endo-thermally reduced using concentrated solar radiation as the energy source of process heat. In the 2nd oxidation step, nonstoichiometric ceria reacts with H2O and CO2 to form H2 and CO - syngas - which is finally converted into kerosene by the Fischer-Tropsch process. The RPC featured dual-scale porosity for enhanced heat and mass transfer: mm-size pores for volumetric radiation absorption during the reduction step and μm-size pores within its struts for fast kinetics during the oxidation step. We report on the engineering design of the solar reactor and the experimental demonstration of over 290 consecutive redox cycles for producing high-quality syngas suitable for the processing of liquid hydrocarbon fuels.
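For reference, the two-step ceria cycle described above is usually written as the following reaction pair (standard form from the thermochemical-fuels literature; the nonstoichiometry delta is not quantified in the abstract):

\mathrm{CeO_2 \;\longrightarrow\; CeO_{2-\delta} + \tfrac{\delta}{2}\,O_2} \qquad \text{(solar, endothermic reduction)}

\mathrm{CeO_{2-\delta} + \delta\,H_2O \;\longrightarrow\; CeO_2 + \delta\,H_2}, \qquad \mathrm{CeO_{2-\delta} + \delta\,CO_2 \;\longrightarrow\; CeO_2 + \delta\,CO} \qquad \text{(exothermic re-oxidation to syngas)}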
Sensorimotor and Cognitive Predictors of Impaired Gait Adaptability in Older People.
Caetano, Maria Joana D; Menant, Jasmine C; Schoene, Daniel; Pelicioni, Paulo H S; Sturnieks, Daina L; Lord, Stephen R
2017-09-01
The ability to adapt gait when negotiating unexpected hazards is crucial to maintain stability and avoid falling. This study investigated whether impaired gait adaptability in a task including obstacle and stepping targets is associated with cognitive and sensorimotor capacities in older adults. Fifty healthy older adults (74±7 years) were instructed to either (a) avoid an obstacle at usual step distance or (b) step onto a target at either a short or long step distance projected on a walkway two heel strikes ahead and then continue walking. Participants also completed cognitive and sensorimotor function assessments. Stroop test and reaction time performance significantly discriminated between participants who did and did not make stepping errors, and poorer Trail-Making test performance predicted shorter penultimate step length in the obstacle avoidance condition. Slower reaction time predicted poorer stepping accuracy; increased postural sway, weaker quadriceps strength, and poorer Stroop and Trail-Making test performances predicted increased number of steps taken to approach the target/obstacle and shorter step length; and increased postural sway and higher concern about falling predicted slower step velocity. Superior executive function, fast processing speed, and good muscle strength and balance were all associated with successful gait adaptability. Processing speed appears particularly important for precise foot placements; cognitive capacity for step length adjustments; and early and/or additional cognitive processing involving the inhibition of a stepping pattern for obstacle avoidance. This information may facilitate fall risk assessments and fall prevention strategies. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Key Steps in the Special Review Process
EPA uses this process when it has reason to believe that the use of a pesticide may result in unreasonable adverse effects on people or the environment. Steps include comprehensive risk and benefit analyses and multiple Position Documents.
An intraorganizational model for developing and spreading quality improvement innovations
Kellogg, Katherine C.; Gainer, Lindsay A.; Allen, Adrienne S.; O'Sullivan, Tatum; Singer, Sara J.
2017-01-01
Background: Recent policy reforms encourage quality improvement (QI) innovations in primary care, but practitioners lack clear guidance regarding spread inside organizations. Purpose: We designed this study to identify how large organizations can facilitate intraorganizational spread of QI innovations. Methodology/Approach: We conducted ethnographic observation and interviews in a large, multispecialty, community-based medical group that implemented three QI innovations across 10 primary care sites using a new method for intraorganizational process development and spread. We compared quantitative outcomes achieved through the group’s traditional versus new method, created a process model describing the steps in the new method, and identified barriers and facilitators at each step. Findings: The medical group achieved substantial improvement using its new method of intraorganizational process development and spread of QI innovations: standard work for rooming and depression screening, vaccine error rates and order compliance, and Pap smear error rates. Our model details nine critical steps for successful intraorganizational process development (set priorities, assess the current state, develop the new process, and measure and refine) and spread (develop support, disseminate information, facilitate peer-to-peer training, reinforce, and learn and adapt). Our results highlight the importance of utilizing preexisting organizational structures such as established communication channels, standardized roles, common workflows, formal authority, and performance measurement and feedback systems when developing and spreading QI processes inside an organization. In particular, we detail how formal process advocate positions in each site for each role can facilitate the spread of new processes. Practice Implications: Successful intraorganizational spread is possible and sustainable. Developing and spreading new QI processes across sites inside an organization requires creating a shared understanding of the necessary process steps, considering the barriers that may arise at each step, and leveraging preexisting organizational structures to facilitate intraorganizational process development and spread. PMID:27428788
Technical Writing: Process and Product. Third Edition.
ERIC Educational Resources Information Center
Gerson, Sharon J.; Gerson, Steven M.
This book guides students through the entire writing process--prewriting, writing, and rewriting--developing an easy-to-use, step-by-step technique for writing the types of documents they will encounter on the job. It engages students in the writing process and encourages hands-on application as well as discussions about ethics, audience…
Community Education: A Community Planning Process Guide.
ERIC Educational Resources Information Center
Wiglesworth, Bill, Comp.
Designed to assist in the planning of community education and services, this booklet offers an argument in support of as well as step-by-step implementation instructions for a 2-day planning process. Following a discussion of the advantages of cooperative planning, the community planning process is outlined. Examined next are the reasons why a…
Introducing the "Decider" Design Process
ERIC Educational Resources Information Center
Prasa, Anthony R., Jr.; Del Guercio, Ryan
2016-01-01
Engineers are faced with solving important problems every day and must follow a step-by-step design process to arrive at solutions. Students who are taught an effective design process to apply to engineering projects begin to see problems as an engineer would, consider all ideas, and arrive at the best solution. Using an effective design process…
Step by Step: Avoiding Spiritual Bypass in 12-Step Work
ERIC Educational Resources Information Center
Cashwell, Craig S.; Clarke, Philip B.; Graves, Elizabeth G.
2009-01-01
With spirituality as a cornerstone, 12-step groups serve a vital role in the recovery community. It is important for counselors to be mindful, however, of the potential for clients to be in spiritual bypass, which likely will undermine the recovery process.
Suggested Steps for Planning and Building a New School Building.
ERIC Educational Resources Information Center
Oregon State Board of Education, Salem.
Many school board members are inexperienced in the construction process and unaware of the steps to be taken in school building construction. For this reason, this step-by-step outline attempts in a few short paragraphs under each step in the planning, bonding, and building stages to offer suggestions and advice to the school board members.…
Ren, Xiaojun; Deng, Ruijie; Wang, Lida; Zhang, Kaixiang
2017-01-01
RNA splicing, which mainly involves two transesterification steps, is a fundamental process of gene expression and its abnormal regulation contributes to serious genetic diseases. Antisense oligonucleotides (ASOs) are genetic control tools that can be used to specifically control genes through alteration of the RNA splicing pathway. Despite intensive research, how ASOs or various other factors influence the multiple processes of RNA splicing still remains obscure. This is largely due to an inability to analyze the splicing efficiency of each step in the RNA splicing process with high sensitivity. We addressed this limitation by introducing a padlock probe-based isothermal amplification assay to achieve quantification of the specific products in different splicing steps. With this amplified assay, the roles that ASOs play in RNA splicing inhibition in the first and second steps could be distinguished. We identified that 5′-ASO could block RNA splicing by inhibiting the first step, while 3′-ASO could block RNA splicing by inhibiting the second step. This method provides a versatile tool for assisting efficient ASO design and discovering new splicing modulators and therapeutic drugs. PMID:28989608
ERIC Educational Resources Information Center
West, Alfred W.
This is the third in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals with the calculation procedures associated with a step-feed process. Illustrations and examples are included to…
Hybrid sulfur cycle operation for high-temperature gas-cooled reactors
Gorensek, Maximilian B
2015-02-17
A hybrid sulfur (HyS) cycle process for the production of hydrogen is provided. The process uses a proton exchange membrane (PEM) SO.sub.2-depolarized electrolyzer (SDE) for the low-temperature, electrochemical reaction step and a bayonet reactor for the high-temperature decomposition step The process can be operated at lower temperature and pressure ranges while still providing an overall energy efficient cycle process.
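For context, the two steps of the hybrid sulfur cycle referred to above are conventionally written as follows (standard textbook form, not quoted from the patent):

\mathrm{SO_2 + 2\,H_2O \;\longrightarrow\; H_2SO_4 + H_2} \qquad \text{(SO}_2\text{-depolarized electrolysis, low temperature)}

\mathrm{H_2SO_4 \;\longrightarrow\; H_2O + SO_2 + \tfrac{1}{2}\,O_2} \qquad \text{(thermal decomposition, high temperature)}

\text{Net: } \mathrm{H_2O \;\longrightarrow\; H_2 + \tfrac{1}{2}\,O_2}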
De Craemer, Marieke; Verloigne, Maïté; De Bourdeaudhuij, Ilse; Androutsos, Odysseas; Iotova, Violeta; Moreno, Luis; Koletzko, Berthold; Socha, Piotr; Manios, Yannis; Cardon, Greet
2017-08-29
The ToyBox-intervention is a theory- and evidence-based intervention delivered in kindergartens to improve four- to six-year-old children's energy balance-related behaviours and prevent obesity. The current study aimed to (1) examine the effect of the ToyBox-intervention on increasing European four- to six-year-old children's steps per day, and (2) examine if a higher process evaluation score from teachers and parents was related to a more favourable effect on steps per day. A sample of 2438 four- to six-year-old children (51.9% boys, mean age 4.75 ± 0.43 years) from 6 European countries (Belgium, Bulgaria, Germany, Greece, Poland and Spain) wore a motion sensor (pedometer or accelerometer) for a minimum of two weekdays and one weekend day both at baseline and follow-up to objectively measure their steps per day. Kindergarten teachers implemented the physical activity component of the ToyBox-intervention for 6 weeks in total, with a focus on (1) environmental changes in the classroom, (2) the child performing the actual behaviour and (3) classroom activities. Children's parents received newsletters, tip cards and posters. To assess intervention effects, multilevel repeated measures analyses were conducted for the total sample and the six intervention countries separately. In addition, process evaluation questionnaires were used to calculate a total process evaluation score (with implementation and satisfaction as a part of the overall score) for teachers and parents, which was then linked with the physical activity outcomes. No significant intervention effects on four- to six-year-old children's steps per weekday, steps per weekend day and steps per average day were found, both in the total sample and in the country-specific samples (all p > 0.05). In general, the intervention effects on steps per day were least favourable in four- to six-year-old children with a low teachers' process evaluation score and most favourable in four- to six-year-old children with a high teachers' process evaluation score. No differences in intervention effects were found for a low, medium or high parents' process evaluation score. The physical activity component of the ToyBox-intervention had no overall effect on four- to six-year-old children's steps per day. However, the process evaluation scores showed that kindergarten teachers who implemented the physical activity component of the ToyBox-intervention as planned and were satisfied with it achieved favourable effects on children's steps per day. Strategies to motivate, actively involve and engage the kindergarten teachers and parents/caregivers are needed to induce larger effects.
Dietary Screener Questionnaire in the NHIS CCS 2015: Data Processing and Scoring Procedures
Our NCI research team followed several steps to formulate the Dietary Screener Questionnaire (DSQ) scoring algorithms. These steps are described for researchers who may be interested in the methodologic process our team used.
Dietary Screener Questionnaire in the NHIS CCS 2010: Data Processing and Scoring Procedures
Our NCI research team followed several steps to formulate the Dietary Screener Questionnaire (DSQ) scoring algorithms. These steps are described for researchers who may be interested in the methodologic process our team used.
Self-assembly and continuous growth of hexagonal graphene flakes on liquid Cu
NASA Astrophysics Data System (ADS)
Cho, Seong-Yong; Kim, Min-Sik; Kim, Minsu; Kim, Ki-Ju; Kim, Hyun-Mi; Lee, Do-Joong; Lee, Sang-Hoon; Kim, Ki-Bum
2015-07-01
Graphene growth on liquid Cu has received great interest, owing to the self-assembly behavior of hexagonal graphene flakes with aligned orientation and to the possibility of forming a single grain of graphene through a commensurate growth of these graphene flakes. Here, we propose and demonstrate a two-step growth process which allows the formation of self-assembled, completely continuous graphene on liquid Cu. After the formation of full coverage on the liquid Cu, grain boundaries were revealed via selective hydrogen etching and the original grain boundaries were clearly resolved. This result indicates that, while the flakes self-assembled with the same orientation, there still remain structural defects, gaps and voids that were not resolved by optical microscopy or scanning electron microscopy. To overcome this limitation, the two-step growth process was employed, consisting of a sequential process of a normal single-layer graphene growth and self-assembly process with a low carbon flux, followed by the final stage of graphene growth at a high degree of supersaturation with a high carbon flux. Continuity of the flakes was verified via hydrogen etching and a NaCl-assisted oxidation process, as well as by measuring the electrical properties of the graphene grown by the two-step process. Two-step growth can provide a continuous graphene layer, but commensurate stitching should be further studied.
NASA Astrophysics Data System (ADS)
Sathyaseelan, V. S.; Rufus, A. L.; Chandramohan, P.; Subramanian, H.; Velmurugan, S.
2015-12-01
Full system decontamination of Primary Heat Transport (PHT) system of Pressurised Heavy Water Reactors (PHWRs) resulted in low decontamination factors (DF) on stainless steel (SS) surfaces. Hence, studies were carried out with 403 SS and 410 SS that are the material of construction of "End-Fitting body" and "End-Fitting Liner tubes". Three formulations were evaluated for the dissolution of passive films formed over these alloys viz., i) Two-step process consisting of oxidation and reduction reactions, ii) Dilute Chemical Decontamination (DCD) and iii) High Temperature Process. The two-step and high temperature processes could dissolve the oxide completely while the DCD process could remove only 60%. Various techniques like XRD, Raman spectroscopy and SEM-EDX were used for assessing the dissolution process. The two-step process is time consuming, laborious while the high temperature process is less time consuming and is recommended for SS decontamination.
Godah, Mohammad W; Abdul Khalek, Rima A; Kilzar, Lama; Zeid, Hiba; Nahlawi, Acile; Lopes, Luciane Cruz; Darzi, Andrea J; Schünemann, Holger J; Akl, Elie A
2016-12-01
Low- and middle-income countries adapt World Health Organization (WHO) guidelines instead of developing them de novo for financial, epidemiologic, sociopolitical, cultural, organizational, and other reasons. To systematically evaluate reported processes used in the adaptation of WHO guidelines for human immunodeficiency virus (HIV) and tuberculosis (TB), we searched three online databases/repositories: the United States Agency for International Development (USAID) AIDS Support and Technical Resources - Sector One program (AIDSTAR-One) National Treatment Database, the AIDSspace Guideline Repository, and the WHO Database of national HIV and TB guidelines. We assessed the rigor and quality of the reported adaptation methodology using the ADAPTE process as a benchmark. Of 170 eligible guidelines, only 32 (19%) reported documentation on the adaptation process. The median and interquartile range of the number of ADAPTE steps fulfilled by the eligible guidelines were 11.5 (10, 13.5) (out of 23 steps). The number of guidelines (out of 32) fulfilling each ADAPTE step was 18 (interquartile range, 5-27). Seventeen of 32 guidelines (53%) met all steps relevant to the setup phase, whereas none met all steps relevant to the adaptation phase. The number of well-documented adaptation methodologies in national HIV and/or TB guidelines is very low. There is a need for a standardized and systematic framework for guideline adaptation and for improved reporting of the processes used. Copyright © 2016 Elsevier Inc. All rights reserved.
Design and operation of a continuous integrated monoclonal antibody production process.
Steinebach, Fabian; Ulmer, Nicole; Wolf, Moritz; Decker, Lara; Schneider, Veronika; Wälchli, Ruben; Karst, Daniel; Souquet, Jonathan; Morbidelli, Massimo
2017-09-01
The realization of an end-to-end integrated continuous lab-scale process for monoclonal antibody manufacturing is described. For this, a continuous cultivation with filter-based cell-retention, a continuous two column capture process, a virus inactivation step, a semi-continuous polishing step (twin-column MCSGP), and a batch-wise flow-through polishing step were integrated and operated together. In each unit, the implementation of internal recycle loops allows to improve the performance: (a) in the bioreactor, to simultaneously increase the cell density and volumetric productivity, (b) in the capture process, to achieve improved capacity utilization at high productivity and yield, and (c) in the MCSGP process, to overcome the purity-yield trade-off of classical batch-wise bind-elute polishing steps. Furthermore, the design principles, which allow the direct connection of these steps, some at steady state and some at cyclic steady state, as well as straight-through processing, are discussed. The setup was operated for the continuous production of a commercial monoclonal antibody, resulting in stable operation and uniform product quality over the 17 cycles of the end-to-end integration. The steady-state operation was fully characterized by analyzing at the outlet of each unit at steady state the product titer as well as the process (HCP, DNA, leached Protein A) and product (aggregates, fragments) related impurities. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1303-1313, 2017. © 2017 American Institute of Chemical Engineers.
Organic thin film transistor with a simplified planar structure
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yu, Jungsheng; Zhong, Jian; Jiang, Yadong
2009-05-01
An organic thin film transistor (OTFT) with a simplified planar structure is described. The gate electrode and the source/drain electrodes of the OTFT are processed in one planar structure. These three electrodes are deposited on the glass substrate by DC sputtering using a Cr/Ni target, and electrode layouts with different width-to-length ratios are then made by photolithography at the same time. Only one deposition step and one photolithography step are needed, while the conventional process takes at least two deposition steps and two photolithography steps: metal is first prepared on the other side of the glass substrate and the electrode is formed by photolithography, and the source/drain electrodes are then prepared by deposition and photolithography on the side with the insulation layer. Compared to the conventional OTFT process, the process in this work is simplified. After the three electrodes are prepared, the insulation layer is made by spin coating; the organic material polyimide is used as the insulation layer. A small-molecule material, pentacene, is evaporated onto the insulation layer by vacuum deposition as the active layer. The OTFT process needs only three steps in total. A semi-automatic probe stage is used to connect the three electrodes to the probes of the test instrument. A charge carrier mobility of 0.3 cm2/(V s) and an on/off current ratio of 10^5 are obtained from OTFTs on glass substrates.
A process for the preparation of cysteine from cystine
Chang, Shih-Ger; Liu, David K.; Griffiths, Elizabeth A.; Littlejohn, David
1989-01-01
The present invention in one aspect relates to a process for the simultaneous removal of NO.sub.x and SO.sub.2 from a fluid stream comprising mixtures thereof and in another aspect relates to the separation, use and/or regeneration of various chemicals contaminated or spent in the process and which includes the steps of: (A) contacting the fluid stream at a temperature of between about 105.degree. and 180.degree. C. with a liquid aqueous slurry or solution comprising an effective amount of an iron chelate of an amino acid moiety having at least one --SH group; (B) separating the fluid stream from the particulates formed in step (A) comprising the chelate of the amino acid moiety and fly ash; (C) washing and separating the particulates of step (B) with an aqueous solution having a pH value of between about 5 to 8; (D) subsequently washing and separating the particulates of step (C) with a strongly acidic aqueous solution having a pH value of between about 1 to 3; (E) washing and separating the particulates of step (D) with a basic aqueous solution having a pH value of between about 9 to 12; (F) optionally adding additional amino acid moiety, iron (II) and alkali to the aqueous liquid from step (D) to produce an aqueous solution or slurry similar to that in step (A) having a pH value of between about 4 to 12; and (G) recycling the aqueous slurry of step (F) to the contacting zone of step (A). Steps (D) and (E) can be carried out in the reverse sequence; however, the preferred order is (D) and then (E). In a preferred embodiment the present invention provides an improved process for the preparation (regeneration) of cysteine from cystine, which includes reacting an aqueous solution of cystine at a pH of between about 9 to 13 with a reducing agent selected from hydrogen sulfide or alkali metal sulfides, sulfur dioxide, an alkali metal sulfite or mixtures thereof for a time and at a temperature effective to cleave and reduce the cystine to cysteine with subsequent recovery of the cysteine. In another preferred embodiment the present invention provides a process for the removal of NO.sub.x, SO.sub.2 and particulates from a fluid stream which includes the steps of (A) injecting into a reaction zone an aqueous solution itself comprising (i) an amino acid moiety selected from those described above; (ii) iron (II) ion; and (iii) an alkali, wherein the aqueous solution has a pH of between about 4 and 11; followed by solids separation and washing as is described in steps (B), (C), (D) and (E) above. The overall process is useful to reduce acid rain components from combustion gas sources.
Photopolymerization Of Levitated Droplets
NASA Technical Reports Server (NTRS)
Rembaum, Alan; Rhim, Won-Kyu; Hyson, Michael T.; Chang, Manchium
1989-01-01
Experimental containerless process combines two established techniques to make variety of polymeric microspheres. In single step, electrostatically-levitated monomer droplets polymerized by ultraviolet light. Faster than multiple-step emulsion polymerization process used to make microspheres. Droplets suspended in cylindrical quadrupole electrostatic levitator. Alternating electrostatic field produces dynamic potential along axis. Process enables tailoring of microspheres for medical, scientific, and industrial applications.
Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo
2000-01-01
A treatment process for a hydrogen-containing off-gas stream from a refinery, petrochemical plant or the like. The process includes three separation steps: condensation, membrane separation and hydrocarbon fraction separation. The membrane separation step is characterized in that it is carried out under conditions at which the membrane exhibits a selectivity in favor of methane over hydrogen of at least about 2.5.
Seven-Step Problem-Based Learning in an Interaction Design Course
ERIC Educational Resources Information Center
Schultz, Nette; Christensen, Hans Peter
2004-01-01
The objective in this paper is the implementation of the highly structured seven-step problem-based learning (PBL) procedure as part of the learning process in a human-computer interaction (HCI) design course at the Technical University of Denmark, taking into account the common learning processes in PBL and the interaction design process. These…
NASA Technical Reports Server (NTRS)
Qader, S. A.
1984-01-01
Steam injection improves yield and quality of product. Single-step process for liquefying coal increases liquid yield and reduces hydrogen consumption. Principal difference between this and earlier processes is the injection of steam into the reactor. Steam lowers viscosity of liquid product, so further upgrading is unnecessary.
47 CFR 1.10009 - What are the steps for electronic filing?
Code of Federal Regulations, 2010 CFR
2010-10-01
International Bureau Filing System § 1.10009 What are the steps for electronic filing? (a) Step 1: Register for... an FRN, go to Step 2. (2) In order to process your electronic application, you must have an FRN. You...
Fully Burdened Cost of Fuel Using Input-Output Analysis
2011-12-01
...wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and...
NASA Technical Reports Server (NTRS)
Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat
2008-01-01
This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing step stems from limitations in the seq-gen modeling resulting in incorrect DKF generation that is then cleaned up in post-processing.
Step by Step to Smoke-Free Schools.
ERIC Educational Resources Information Center
VanSciver, James H.; Roberts, H. Earl
1989-01-01
This ERIC digest discusses ways of effectively banning smoking in schools so that controversies do not continue after implementation of the policy. By advocating a process approach, the document cites steps taken by the Lake Forest School Board to prohibit smoking in and around school grounds. Step one involved committee planning involving…
2007-12-01
Topics include standard operating procedures, visual displays for workflow and communication, total productive maintenance, and poka-yoke techniques to prevent... Improving each process step or eliminating non-value-added steps, and reducing the seven common wastes, will decrease the total time of a process.
Thermochemical water decomposition processes
NASA Technical Reports Server (NTRS)
Chao, R. E.
1974-01-01
Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages when compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic questions are also considered.
A preliminary evaluation of an F100 engine parameter estimation process using flight data
NASA Technical Reports Server (NTRS)
Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.
1990-01-01
The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
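A minimal sketch of the kind of Kalman filter update used in the first step is shown below; it assumes a linear measurement model and random-walk dynamics for five deterioration parameters, with all matrices and dimensions chosen for illustration rather than taken from the F100 or compact engine models.

    import numpy as np

    def kalman_step(x, P, z, H, R, Q):
        """One predict/update cycle for random-walk states (state transition F = I).

        x : (n,)  current estimate of the deterioration parameters
        P : (n,n) estimate covariance
        z : (m,)  measured engine outputs
        H : (m,n) linearized measurement sensitivity (illustrative)
        R : (m,m) measurement noise covariance
        Q : (n,n) process noise covariance
        """
        # Predict: random-walk model, so the state estimate itself is unchanged
        P = P + Q
        # Update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)             # corrected estimate
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Illustrative use: 5 deterioration parameters, 3 measured outputs
    rng = np.random.default_rng(0)
    x, P = np.zeros(5), np.eye(5)
    H = rng.normal(size=(3, 5))
    R, Q = 0.01 * np.eye(3), 1e-4 * np.eye(5)
    true_params = np.array([0.1, -0.05, 0.02, 0.0, 0.03])
    for _ in range(100):
        z = H @ true_params + 0.1 * rng.normal(size=3)
        x, P = kalman_step(x, P, z, H, R, Q)
    print(x)  # converges toward the assumed "true" deterioration values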
Automatic diagnosis of malaria based on complete circle-ellipse fitting search algorithm.
Sheikhhosseini, M; Rabbani, H; Zekri, M; Talebi, A
2013-12-01
Diagnosis of malaria parasitemia from blood smears is a subjective and time-consuming task for pathologists. An automatic diagnostic process would reduce the diagnostic time; it can also serve as a second opinion for pathologists and may be useful in malaria screening. This study presents an automatic method for malaria diagnosis from thin blood smears. Because the malaria life cycle starts with the formation of a ring around the parasite nucleus, the proposed approach is mainly based on curve fitting to detect the parasite ring in the blood smear. The method is composed of six main phases. The first is a stain object extraction step, which extracts candidate objects that may be infected by malaria parasites; it includes stained pixel extraction based on intensity and colour, and stained object segmentation by defining a stained circle matching. The second step is a preprocessing phase that makes use of nonlinear diffusion filtering. The third step detects the parasite nucleus in the resulting image according to image intensity. The fourth step introduces a complete search process in which a circle search identifies the direction and initial points for a direct least-squares ellipse fitting algorithm; in the ellipse search, once the parasite shape is completed, undesired regions with high error values are removed and the ellipse parameters are modified. In the fifth step, features are extracted from the parasite candidate region instead of from the whole candidate object; this feature extraction strategy, made possible by the search process, removes the need for clump-splitting methods. In addition, defining the stained circle matching process in the first step speeds up the whole procedure. Finally, a series of decision rules is applied to the extracted features to decide on the positivity or negativity of malaria parasite presence. The algorithm was applied to 26 digital images obtained from thin blood smear films; the images contained 1274 objects that were either infected by parasites or healthy. Applying the automatic identification of malaria to this database showed a sensitivity of 82.28% and a specificity of 98.02%. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
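The direct least-squares ellipse fit on which the fourth phase relies can be written compactly. The sketch below is the basic (non-stabilized) Fitzgibbon-style formulation and assumes boundary points already extracted from a candidate ring; it illustrates only the fitting sub-step, not the authors' complete circle-ellipse search algorithm.

    import numpy as np

    def fit_ellipse_direct(x, y):
        """Direct least-squares ellipse fit (basic Fitzgibbon formulation).

        Returns conic coefficients (a, b, c, d, e, f) of
        a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 under the ellipse constraint 4ac - b^2 > 0.
        """
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        S = D.T @ D                     # scatter matrix
        C = np.zeros((6, 6))            # constraint matrix enforcing 4ac - b^2 = 1
        C[0, 2] = C[2, 0] = 2.0
        C[1, 1] = -1.0
        # Generalized eigenproblem S a = lambda C a, solved here as eig(S^-1 C)
        eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
        k = np.argmax(eigval.real)      # the ellipse solution has the single positive eigenvalue
        return np.real(eigvec[:, k])

    # Illustrative use: noisy samples from a known ellipse
    t = np.linspace(0, 2 * np.pi, 80)
    x = 3.0 * np.cos(t) + 1.0 + 0.05 * np.random.randn(t.size)
    y = 1.5 * np.sin(t) - 2.0 + 0.05 * np.random.randn(t.size)
    coeffs = fit_ellipse_direct(x, y)
    print(coeffs / np.linalg.norm(coeffs))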
Coastal Algorithms and On-Demand Processing- The Lessons Learnt from CoastColour for Sentinel 3
NASA Astrophysics Data System (ADS)
Brockmann, Carsten; Doerffer, Roland; Boettcher, Martin; Kramer, Uwe; Zuhlke, Marco; Pinnock, Simon
2015-12-01
The ESA DUE CoastColour project was initiated to provide water quality products for important coastal zones globally. A new 5-component bio-optical model was developed and used in a 3-step approach for regional processing of ocean colour data. The L1P step consists of radiometric and geometric system corrections, and top-of-atmosphere pixel classification including cloud screening, sun glint risk masking and detection of floating vegetation. The second step includes the atmospheric correction and provides the L2R product, which comprises marine reflectances with error characterisation and normalisation. The third step is the in-water processing, which produces IOPs, attenuation coefficients and water constituent concentrations. Each of these steps will benefit from the additional bands on OLCI. The 5-component bio-optical model will already be used in the standard ESA processing of OLCI, and part of the pixel classification methods will also be part of the standard products; other algorithm adaptations are in preparation. Another important advantage of the CoastColour approach is the highly configurable processing chain, which allows adaptation to the individual characteristics of the area of interest, temporal window, algorithm parametrisation and processing chain configuration. This flexibility is made available to data users through the CoastColour on-demand processing service. The complete global MERIS Full and Reduced Resolution data archive is accessible, covering the time range from 17 May 2002 until 8 April 2012, almost 200 TB of input data available online. The CoastColour on-demand processing service can serve as a model for hosted processing, where the software is moved to the data instead of moving the data to the users, which will be a challenge with the large amount of data coming from Sentinel 3.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
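The isoconversional step mentioned here can be illustrated with a minimal Friedman-type analysis. The sketch below assumes you already have conversion curves alpha(t) and temperature programs T(t) for several runs; it is only a generic illustration of that one computation, not the authors' full TG-DSC deconvolution procedure.

    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def friedman_activation_energy(runs, alphas=np.arange(0.1, 0.91, 0.1)):
        """Friedman (differential) isoconversional analysis.

        runs   : list of (t, T, alpha) arrays for several heating programs
        alphas : conversion levels at which the apparent Ea is evaluated
        Returns (alphas, Ea) with Ea in J/mol, from the slope of
        ln(dalpha/dt) versus 1/T at each fixed conversion.
        """
        Ea = []
        for a in alphas:
            inv_T, ln_rate = [], []
            for t, T, alpha in runs:
                rate = np.gradient(alpha, t)          # dalpha/dt along this run
                i = np.argmin(np.abs(alpha - a))      # point closest to alpha = a
                inv_T.append(1.0 / T[i])
                ln_rate.append(np.log(max(rate[i], 1e-12)))
            slope, _ = np.polyfit(inv_T, ln_rate, 1)  # ln(dalpha/dt) = const - Ea/(R*T)
            Ea.append(-slope * R)
        return np.asarray(alphas), np.asarray(Ea)

    # Usage (with user-supplied arrays):
    # alphas, Ea = friedman_activation_energy([(t1, T1, a1), (t2, T2, a2), (t3, T3, a3)])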
Ultramap: the all in One Photogrammetric Solution
NASA Astrophysics Data System (ADS)
Wiechert, A.; Gruber, M.; Karner, K.
2012-07-01
This paper describes in detail the dense matcher developed over several years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. This dense matcher was developed exclusively for, and used by, Microsoft for the production of the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release in mid-2012, which represents a revolutionary step in digital photogrammetry. The dense matcher generates digital surface models (DSM) and digital terrain models (DTM) automatically out of a set of overlapping UltraCam images. The models have an outstanding point density of several hundred points per square meter and sub-pixel accuracy. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process; this rectification ensures very efficient processing and detects occluded areas by applying a back-matching step. In this dense image matching process a cost function consisting of a matching score as well as a smoothness term is minimized. In the second step the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 also features an additional step which is presented in this paper: a completely automated true-ortho and ortho workflow. For this, the UltraCam images are combined with the DSM or DTM in an automated rectification step, yielding high quality true-ortho or ortho images from a highly automated workflow. The paper presents the new workflow and first results.
Wang, Chen; Lv, Shidong; Wu, Yuanshuang; Lian, Ming; Gao, Xuemei; Meng, Qingxiong
2016-10-01
Biluochun is a typical non-fermented tea and is famous for its unique aroma in China. Few studies have evaluated the effect of the manufacturing process on the formation and content of its aroma. The volatile components were extracted at different manufacturing process steps of Biluochun green tea using fully automated headspace solid-phase microextraction (HS-SPME) and further characterised by gas chromatography-mass spectrometry (GC-MS). Among 67 volatile components collected, the fractions of linalool oxides, β-ionone, phenylacetaldehyde, aldehydes, ketones, and nitrogen compounds increased while alcohols and hydrocarbons declined during the manufacturing process. The aroma compounds decreased the most during the drying steps. We identified a number of significantly changed components that can be used as markers for quality control during the production process of Biluochun. The drying step played a major role in the aroma formation of green tea products and should be the most important step for quality control. © 2016 Society of Chemical Industry.
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
A framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is given using the following steps: (1) during the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) select the information products which match or support the accepted processes and products of step 4; and (6) select the design methodology which produces the information products selected in step 5.
NASA Astrophysics Data System (ADS)
Zhang, Haijie; Chen, Shilu; Zhong, Jie; Zhang, Shaowen; Zhang, Yunhong; Zhang, Xiuhui; Li, Zesheng; Zeng, Xiao Cheng
2018-03-01
Sulfate is one of the most important components of the aerosol due to its key role in air pollution and global climate change. Recent work has suggested that reactive nitrogen chemistry in aqueous water can explain the missing source of sulfate in the aqueous water. Herein, we have mapped out the energy profile of the oxidation of SO2 by NO2, and two feasible three-step mechanisms have been proposed. For the oxidation of HOSO2- and HSO3- by dissolved NO2 in weakly acidic and neutral aerosol (pH ≤ 7), the main contribution to the missing sulfate production comes from the oxidation of HOSO2-; the whole process is self-sustaining. For the oxidation of SO32- in alkaline aerosol (pH > 7), the third step, in which decomposition of H2O and hydrolysis of SO3 proceed as two parallel processes, is the rate-limiting step. The present results help to better understand the missing source of sulfate in the aerosol and hence may lead to better science-based solutions for resolving the severe haze problems in China.
NASA Astrophysics Data System (ADS)
Han, S. T.; Shu, X. D.; Shchukin, V.; Kozhevnikova, G.
2018-06-01
In order to determine reasonable process parameters for forming a multi-step shaft by cross wedge rolling, the rolling-forming process of a multi-step shaft was studied with the DEFORM-3D finite element software. An interactive orthogonal experiment was used to study the effect of eight parameters, the first section shrinkage rate φ1, the first forming angle α1, the first spreading angle β1, the first spreading length L1, the second section shrinkage rate φ2, the second forming angle α2, the second spreading angle β2 and the second spreading length L2, on the quality of the shaft end and the microstructure uniformity. By using the fuzzy mathematics comprehensive evaluation method and range (extreme difference) analysis, the order of influence of the process parameters on the quality of the multi-step shaft is obtained as: β2 > φ2 > L1 > α1 > β1 > φ1 > α2 > L2. The results of the study can provide guidance for obtaining multi-step shafts with high mechanical properties and achieving near net forming without a stub bar in cross wedge rolling.
NASA Technical Reports Server (NTRS)
Anton, Claire E. (Inventor)
1993-01-01
Optimum strengthening of a superplastically formed aluminum-lithium alloy structure is achieved via a thermal processing technique which eliminates the conventional step of solution heat-treating immediately following the step of superplastic forming of the structure. The thermal processing technique involves quenching of the superplastically formed structure using static air, forced air or water quenching.
ERIC Educational Resources Information Center
Schwing, Carl M.
This guide describes standard operating job procedures for the digestion process of wastewater treatment facilities. This process is for reducing the volume of sludge to be treated in subsequent units and to reduce the volatile content of sludge. The guide gives step-by-step instructions for pre-startup, startup, continuous operating, shutdown,…
ERIC Educational Resources Information Center
Deal, Gerald A.; Montgomery, James A.
This guide describes standard operating job procedures for the grit removal process of wastewater treatment plants. Step-by-step instructions are given for pre-start up inspection, start-up, continuous operation, and shut-down procedures. A description of the equipment used in the process is given. Some theoretical material is presented. (BB)
ERIC Educational Resources Information Center
Petrasek, Al, Jr.
This guide describes the standard operating job procedures for the tertiary multimedia filtration process of wastewater treatment plants. The major objective of the filtration process is the removal of suspended solids from the reclaimed wastewater. The guide gives step-by-step instructions for pre-start up, start-up, continuous operation, and…
ERIC Educational Resources Information Center
Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana
2007-01-01
This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…
Data processing has major impact on the outcome of quantitative label-free LC-MS analysis.
Chawade, Aakash; Sandin, Marianne; Teleman, Johan; Malmström, Johan; Levander, Fredrik
2015-02-06
High-throughput multiplexed protein quantification using mass spectrometry is steadily increasing in popularity, with the two major techniques being data-dependent acquisition (DDA) and targeted acquisition using selected reaction monitoring (SRM). However, both techniques involve extensive data processing, which can be performed by a multitude of different software solutions. Analysis of quantitative LC-MS/MS data is mainly performed in three major steps: processing of raw data, normalization, and statistical analysis. To evaluate the impact of data processing steps, we developed two new benchmark data sets, one each for DDA and SRM, with samples consisting of a long-range dilution series of synthetic peptides spiked in a total cell protein digest. The generated data were processed by eight different software workflows and three postprocessing steps. The results show that the choice of the raw data processing software and the postprocessing steps play an important role in the final outcome. Also, the linear dynamic range of the DDA data could be extended by an order of magnitude through feature alignment and a charge state merging algorithm proposed here. Furthermore, the benchmark data sets are made publicly available for further benchmarking and software developments.
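As one concrete illustration of the middle step (normalization), a simple and common choice is median-centring of log-transformed intensities per sample; the sketch below shows only that generic operation, not any of the eight evaluated workflows or the charge-state merging algorithm proposed in the paper.

    import numpy as np

    def median_normalize(intensities):
        """Median-centre log2 intensities per sample (column).

        intensities : (n_features, n_samples) array of raw peak intensities;
        zero or missing values should be masked or imputed before use.
        """
        log_x = np.log2(intensities)
        col_medians = np.nanmedian(log_x, axis=0)
        # shift every sample so its median matches the overall median
        return log_x - col_medians + np.nanmedian(col_medians)

    # Illustrative 4-feature x 3-sample matrix
    x = np.array([[100., 220., 95.],
                  [400., 830., 410.],
                  [50.,  105., 48.],
                  [800., 1650., 790.]])
    print(median_normalize(x))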
NASA Astrophysics Data System (ADS)
Saletti, M.; Molnar, P.; Hassan, M. A.
2017-12-01
Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and entrainment, transport and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems having a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) on the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.
Porsby, Cisse Hedegaard; Vogel, Birte Fonnesbech; Mohr, Mona; Gram, Lone
2008-03-20
Cold-smoked salmon is a ready-to-eat product in which Listeria monocytogenes sometimes can grow to high numbers. The bacterium can colonize the processing environment and it is believed to survive or even grow during the processing steps. The purpose of the present study was to determine if the steps in the processing of cold-smoked salmon affect survival and subsequent growth of a persistent strain of L. monocytogenes to a lesser degree than presumed non-persistent strains. We used a sequence of experiments increasing in complexity: (i) small salmon blocks salted, smoked or dried under model conditions, (ii) fillets of salmon cold-smoked in a pilot plant and finally, (iii) assessment of the bacterial levels before and after processing during commercial scale production. L. monocytogenes proliferated on salmon blocks that were brined or dipped in liquid smoke and left at 25 degrees C in a humidity chamber for 24 h. However, combining brining and liquid smoke with a drying (25 degrees C) step reduced the bacterium 10-100 fold over a 24 h period. Non-salted, brine injected or dry salted salmon fillets were surface inoculated with L. monocytogenes and cold-smoked in a pilot plant. L. monocytogenes was reduced from 10(3) to 10-10(2) CFU/cm(2) immediately after cold-smoking. The greatest reductions were observed in dry salted and brine injected fillets as compared to cold-smoking of non-salted fresh fillets. Levels of L. monocytogenes decreased further when the cold-smoked fish was vacuum-packed and stored at 5 degrees C. A similar decline was seen when inoculating brine injected fillets after cold-smoking. High phenol concentrations are a likely cause of this marked growth inhibition. In a commercial production facility, the total viable count of salmon fillets was reduced 10-1000 fold by salting, cold-smoking and process-freezing (a freezing step after smoking and before slicing). The prevalence of L. monocytogenes in the commercial production facility was too low to determine any quantitative effects, however, one of nine samples was positive before processing and none after. Taken together, the processing steps involved in cold-smoking of salmon are bactericidal and reduce, but do not eliminate L. monocytogenes. A persistent strain was no less sensitive to the processing steps than a clinical strain or strain EGD.
Six-sigma application in tire-manufacturing company: a case study
NASA Astrophysics Data System (ADS)
Gupta, Vikash; Jain, Rahul; Meena, M. L.; Dangayach, G. S.
2017-09-01
Globalization, advances in technology, and increasing customer demands are changing the way companies do business. To address these pressures, the six-sigma define-measure-analyze-improve-control (DMAIC) method is popular and useful: it helps to trim down waste and generate potential improvements in process as well as service industries. In the current research, the DMAIC method was used to decrease the process variation of the bead splice that was causing wastage of material. This six-sigma DMAIC study was initiated by problem identification through the voice of the customer in the define step. The subsequent step consisted of gathering the specification data of the existing tire bead. This was followed by the analysis and improvement steps, where six-sigma quality tools such as the cause-effect diagram, statistical process control, and substantial analysis of the existing system were implemented for root cause identification and reduction in process variation. Process control charts were used for systematic observation and control of the process. Using the DMAIC methodology, the standard deviation was decreased from 2.17 to 1.69. The process capability index (Cp) value was enhanced from 1.65 to 2.95 and the process performance capability index (Cpk) value was enhanced from 0.94 to 2.66. A DMAIC methodology was established that can play a key role in reducing defects in the tire-manufacturing process in India.
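The capability indices quoted above follow from the usual definitions Cp = (USL - LSL)/(6*sigma) and Cpk = min((USL - mu)/(3*sigma), (mu - LSL)/(3*sigma)). The sketch below simply evaluates these for hypothetical bead-splice data and specification limits, since the actual limits are not given in the abstract.

    import numpy as np

    def capability(data, lsl, usl):
        """Process capability Cp and Cpk from the sample mean and standard deviation."""
        mu, sigma = np.mean(data), np.std(data, ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
        return cp, cpk

    # Hypothetical bead-splice measurements and spec limits (illustrative only)
    rng = np.random.default_rng(1)
    splice = rng.normal(loc=10.0, scale=1.69, size=200)
    print(capability(splice, lsl=-5.0, usl=25.0))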
Process for removing an organic compound from water
Baker, Richard W.; Kaschemekat, Jurgen; Wijmans, Johannes G.; Kamaruddin, Henky D.
1993-12-28
A process for removing organic compounds from water is disclosed. The process involves gas stripping followed by membrane separation treatment of the stripping gas. The stripping step can be carried out using one or multiple gas strippers and using air or any other gas as stripping gas. The membrane separation step can be carried out using a single-stage membrane unit or a multistage unit. Apparatus for carrying out the process is also disclosed. The process is particularly suited for treatment of contaminated groundwater or industrial wastewater.
A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams
NASA Astrophysics Data System (ADS)
Molnar, P.
2012-04-01
Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing and self-sustaining structure. The temporary storage of sediment in steps and the release of that sediment in avalanche-like pulses when steps collapse, leads to a complex nonlinear threshold-driven dynamics in sediment transport which has been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems may be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics in sediment input. The hypothesis is that this state, in which sediment transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically-driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model, however it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems. The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focussing on the magnification of intermittency and variability in the system response by the processes of grain blocking and step collapse. The temporal correlation in input and output rates and the number of grains stored in the system at any given time are quantified by spectral analysis and statistics of long-range dependence. Although the model is only conceptually conceived to represent the real processes of step formation and collapse, connections will be made between the modelling results and some field and laboratory data on step-pool systems. The main focus in the discussion will be to demonstrate how even in such a simple model the processes of grain blocking and step collapse may impact the sediment transport rates to the point that certain changes in input are not visible anymore, along the lines of "shredding the signals" proposed by Jerolmack and Paola (2010). The consequences are that the notions of stability and equilibrium, the attribution of cause and effect, and the timescales of process and form in step-pool systems, and perhaps in many other fluvial systems, may have very limited applicability.
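As a concrete illustration of the model class, the toy script below follows the spirit of the description: grains injected at the upstream end travel a random distance, are blocked and stored with some probability (forming "steps"), and stored piles collapse in avalanche-like pulses once they exceed a local threshold. All rules and parameter values are illustrative assumptions, not the actual sandpile model used in the study.

    import numpy as np

    rng = np.random.default_rng(42)

    def run_channel(n_cells=100, n_steps=20000, p_block=0.3, collapse_at=5):
        """Toy 1-d grain-blocking model: storage, threshold collapse, output series."""
        store = np.zeros(n_cells, dtype=int)   # grains held in each cell ("steps")
        output = np.zeros(n_steps, dtype=int)  # grains leaving the reach per time step
        for t in range(n_steps):
            moving = [0]                        # one grain injected at the upstream end
            while moving:
                pos = moving.pop()
                pos += rng.geometric(0.2)       # random downstream travel distance (>= 1)
                if pos >= n_cells:
                    output[t] += 1              # grain exits the reach
                elif rng.random() < p_block:
                    store[pos] += 1             # grain is blocked and stored in a step
                    if store[pos] >= collapse_at:          # step collapses ("avalanche")
                        moving.extend([pos] * store[pos])  # release all stored grains
                        store[pos] = 0
                else:
                    moving.append(pos)          # keeps travelling downstream
        return store, output

    store, output = run_channel()
    print("grains in storage:", store.sum(), " mean output rate:", output.mean())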
SAR correlation technique - An algorithm for processing data with large range walk
NASA Technical Reports Server (NTRS)
Jin, M.; Wu, C.
1983-01-01
This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with an excessively large range migration effect which cannot be accommodated by the existing frequency domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm performs azimuth correlation in two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency domain range migration correction approach for the azimuth compression. The secondary range compression can be processed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.
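The compression building block referred to here, correlating a signal with a reference chirp in the frequency domain, can be sketched in a few lines. The following is a generic matched-filter illustration with made-up sampling parameters, not the SEASAT processor's secondary range compression or its range-migration interpolator.

    import numpy as np

    def chirp(n, fs, bandwidth, duration):
        """Linear FM (chirp) reference of given bandwidth and duration, zero-padded to n samples."""
        t = np.arange(int(duration * fs)) / fs
        k = bandwidth / duration                              # chirp rate
        s = np.exp(1j * np.pi * k * (t - duration / 2) ** 2)
        return np.pad(s, (0, n - s.size))

    def compress(echo, reference):
        """Frequency-domain matched filtering: circular correlation of echo with the reference chirp."""
        n = echo.size
        return np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(reference, n)))

    # Illustrative numbers only: a single point target delayed by 2000 samples
    fs, B, T = 20e6, 15e6, 30e-6
    n = 8192
    ref = chirp(n, fs, B, T)
    echo = np.roll(ref, 2000)
    out = np.abs(compress(echo, ref))
    print("peak at sample", int(np.argmax(out)))   # ~2000, i.e. the target delay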
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. J. Galyean; A. M. Whaley; D. L. Kelly
This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence; and Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
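A compact sketch of the worksheet arithmetic these steps describe is given below, using commonly cited SPAR-H conventions (nominal HEPs of 1E-2 for diagnosis and 1E-3 for action, a product of PSF multipliers, an adjustment factor when three or more negative PSFs apply, and a lower cutoff). The dependence step is omitted, and NUREG/CR-6883 itself should be consulted for the authoritative values.

    def spar_h_hep(task, psf_multipliers):
        """Sketch of Steps 1-3 and 5 of the SPAR-H worksheet arithmetic.

        task            : "diagnosis" or "action"
        psf_multipliers : the eight PSF multipliers assigned in Step 2
        """
        nhep = {"diagnosis": 1e-2, "action": 1e-3}[task]   # assumed nominal HEPs (Step 1)
        composite = 1.0
        for m in psf_multipliers:                          # Step 2 ratings
            composite *= m
        negative = sum(1 for m in psf_multipliers if m > 1.0)
        if negative >= 3:
            # Adjustment factor applied when 3 or more PSFs degrade performance (Step 3)
            hep = nhep * composite / (nhep * (composite - 1.0) + 1.0)
        else:
            hep = nhep * composite
        return max(hep, 1e-5)                              # assumed minimum-value cutoff (Step 5)

    # Example: diagnosis task with degraded time, stress, and complexity PSFs
    print(spar_h_hep("diagnosis", [10, 2, 2, 1, 1, 1, 1, 1]))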
How to Conduct Surveys: A Step-by-Step Guide. Sixth Edition
ERIC Educational Resources Information Center
Fink, Arlene
2016-01-01
Packed with new topics that reflect today's challenges, the Sixth Edition of the bestselling "How to Conduct Surveys" guides readers through the process of developing their own rigorous surveys and evaluating the credibility and transparency of surveys created by others. Offering practical, step-by-step advice and written in the same…
The energy demand of distillation-based systems for ethanol recovery and dehydration can be significant, particularly for dilute solutions. An alternative separation process integrating vapor stripping with a vapor compression step and a vapor permeation membrane separation step...
BACKGROUND: Energy efficient alternatives to distillation for alcohol recovery from dilute solution are needed to improve biofuel sustainability. A process integrating steam stripping with a vapor compression step and a vapor permeation membrane separation step is proposed. The...
Expedited vocational assessment under the sequential evaluation process. Final rules.
2012-07-25
We are revising our rules to give adjudicators the discretion to proceed to the fifth step of the sequential evaluation process for assessing disability when we have insufficient information about a claimant's past relevant work history to make the findings required for step 4. If an adjudicator finds at step 5 that a claimant may be unable to adjust to other work existing in the national economy, the adjudicator will return to the fourth step to develop the claimant's work history and make a finding about whether the claimant can perform his or her past relevant work. We expect that this new expedited process will not disadvantage any claimant or change the ultimate conclusion about whether a claimant is disabled, but it will promote administrative efficiency and help us make more timely disability determinations and decisions.
In-depth analysis and characterization of a dual damascene process with respect to different CD
NASA Astrophysics Data System (ADS)
Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Kim, Wan-Soo; Thrun, Xaver
2018-03-01
In a 200 mm high volume environment, we studied data from a dual damascene process. Dual damascene is a combination of lithography, etch and CMP that is used to create copper lines and contacts in one single step. During these process steps, different metal CDs are measured by different measurement methods. In this study, we analyze the key numbers of the different measurements after different process steps and develop simple models to predict the electrical behavior. In addition, radial profiles of both inline measurement parameters and electrical parameters have been analyzed. A matching method was developed based on inline and electrical data. Finally, correlation analysis for radial signatures is presented that can be used to predict excursions in electrical signatures.
DPPP: Default Pre-Processing Pipeline
NASA Astrophysics Data System (ADS)
van Diepen, Ger; Dijkema, Tammo Jan
2018-04-01
DPPP (Default Pre-Processing Pipeline, also referred to as NDPPP) reads and writes radio-interferometric data in the form of Measurement Sets, mainly those that are created by the LOFAR telescope. It goes through visibilities in time order and contains standard operations like averaging, phase-shifting and flagging bad stations. Between the steps in a pipeline, the data is not written to disk, making this tool suitable for operations where I/O dominates. More advanced procedures such as gain calibration are also included. Other computing steps can be provided by loading a shared library; currently supported external steps are the AOFlagger (ascl:1010.017) and a bridge that enables loading python steps.
A step-by-step methodology for enterprise interoperability projects
NASA Astrophysics Data System (ADS)
Chalmeta, Ricardo; Pazos, Verónica
2015-05-01
Enterprise interoperability is one of the key factors for enhancing enterprise competitiveness. Achieving enterprise interoperability is an extremely complex process which involves different technological, human and organisational elements. In this paper we present a framework to support enterprise interoperability. The framework has been developed taking into account the three domains of interoperability: Enterprise Modelling, Architecture and Platform, and Ontologies. The main novelty of the framework in comparison to existing ones is that it includes a step-by-step methodology that explains how to carry out an enterprise interoperability project taking into account different interoperability views, such as business, process, human resources, technology, knowledge and semantics.
Calculation tool for transported geothermal energy using two-step absorption process
Kyle Gluesenkamp
2016-02-01
This spreadsheet allows the user to calculate parameters relevant to techno-economic performance of a two-step absorption process to transport low temperature geothermal heat some distance (1-20 miles) for use in building air conditioning. The parameters included are (1) energy density of aqueous LiBr and LiCl solutions, (2) transportation cost of trucking solution, and (3) equipment cost for the required chillers and cooling towers in the two-step absorption approach. More information is available in the included public report: "A Technical and Economic Analysis of an Innovative Two-Step Absorption System for Utilizing Low-Temperature Geothermal Resources to Condition Commercial Buildings"
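As a rough illustration of the kind of arithmetic such a spreadsheet encodes, the sketch below converts an assumed solution energy density and trucking rate into delivered energy per trip and a transport cost per ton-hour of cooling. Every input value is an assumption for illustration only, not a number from the report.

    # Illustrative back-of-envelope for transported absorption capacity (all inputs assumed)
    energy_density_kwh_per_m3 = 80.0   # assumed usable energy density of concentrated salt solution
    truck_volume_m3 = 30.0             # assumed tanker volume
    trucking_cost_per_mile = 4.0       # assumed $/mile
    distance_miles = 15.0              # one-way haul distance

    energy_per_trip_kwh = energy_density_kwh_per_m3 * truck_volume_m3
    trip_cost = trucking_cost_per_mile * 2 * distance_miles      # out and back
    cooling_ton_hours = energy_per_trip_kwh / 3.517              # 1 ton of refrigeration = 3.517 kW
    print(f"{energy_per_trip_kwh:.0f} kWh per trip, "
          f"${trip_cost / cooling_ton_hours:.3f} per ton-hour of cooling (transport only)")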
Brown, Derek W; Shulman, Adam; Hudson, Alana; Smith, Wendy; Fisher, Brandon; Hollon, Jon; Pipman, Yakov; Van Dyk, Jacob; Einck, John
2014-11-01
We present a practical, generic, easy-to-use framework for the implementation of new radiation therapy technologies and treatment techniques in low-income countries. The framework is intended to standardize the implementation process, reduce the effort involved in generating an implementation strategy, and provide improved patient safety by reducing the likelihood that steps are missed during the implementation process. The 10 steps in the framework provide a practical approach to implementation. The steps are, 1) Site and resource assessment, 2) Evaluation of equipment and funding, 3) Establishing timelines, 4) Defining the treatment process, 5) Equipment commissioning, 6) Training and competency assessment, 7) Prospective risk analysis, 8) System testing, 9) External dosimetric audit and incident learning, and 10) Support and follow-up. For each step, practical advice for completing the step is provided, as well as links to helpful supplementary material. An associated checklist is provided that can be used to track progress through the steps in the framework. While the emphasis of this paper is on addressing the needs of low-income countries, the concepts also apply in high-income countries. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Comparability of automated human induced pluripotent stem cell culture: a pilot study.
Archibald, Peter R T; Chandra, Amit; Thomas, Dave; Chose, Olivier; Massouridès, Emmanuelle; Laâbi, Yacine; Williams, David J
2016-12-01
Consistent and robust manufacturing is essential for the translation of cell therapies, and the utilisation of automation throughout the manufacturing process may allow for improvements in quality control, scalability, reproducibility and economics of the process. The aim of this study was to measure and establish the comparability between alternative process steps for the culture of hiPSCs. Consequently, the effects of manual centrifugation and automated non-centrifugation process steps, performed using TAP Biosystems' CompacT SelecT automated cell culture platform, upon the culture of a human induced pluripotent stem cell (hiPSC) line (VAX001024c07) were compared. This study has demonstrated that comparable morphologies and cell diameters were observed in hiPSCs cultured using either manual or automated process steps. However, non-centrifugation hiPSC populations exhibited greater cell yields, greater aggregate rates, increased pluripotency marker expression, and decreased differentiation marker expression compared to centrifugation hiPSCs. A trend for decreased variability in cell yield was also observed after the utilisation of the automated process step. This study also highlights the detrimental effect of the cryopreservation and thawing processes upon the growth and characteristics of hiPSC cultures, and demonstrates that automated hiPSC manufacturing protocols can be successfully transferred between independent laboratories.
Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid
2017-10-21
Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step to optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, due to laboratory-scale methods requiring large amounts of materials and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing, regarding impurity depletion, and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.
Quality measurement and benchmarking of HPV vaccination services: a new approach.
Maurici, Massimo; Paulon, Luca; Campolongo, Alessandra; Meleleo, Cristina; Carlino, Cristiana; Giordani, Alessandro; Perrelli, Fabrizio; Sgricia, Stefano; Ferrante, Maurizio; Franco, Elisabetta
2014-01-01
A new measurement process based upon a well-defined mathematical model was applied to evaluate the quality of human papillomavirus (HPV) vaccination centers in 3 of 12 Local Health Units (ASLs) within the Lazio Region of Italy. The quality aspects considered for evaluation were communicational efficiency, organizational efficiency and comfort. The overall maximum achievable value was 86.10%, while the HPV vaccination quality scores for ASL1, ASL2 and ASL3 were 73.07%, 71.08%, and 67.21%, respectively. With this new approach it is possible to represent the probabilistic reasoning of a stakeholder who evaluates the quality of a healthcare provider. All ASLs had margins for improvements and optimal quality results can be assessed in terms of better performance conditions, confirming the relationship between the resulting quality scores and HPV vaccination coverage. The measurement process was structured into three steps and involved four stakeholder categories: doctors, nurses, parents and vaccinated women. In Step 1, questionnaires were administered to collect different stakeholders' points of view (i.e., subjective data) that were elaborated to obtain the best and worst performance conditions when delivering a healthcare service. Step 2 of the process involved the gathering of performance data during the service delivery (i.e., objective data collection). Step 3 of the process involved the elaboration of all data: subjective data from step 1 are used to define a "standard" to test objective data from step 2. This entire process led to the creation of a set of scorecards. Benchmarking is presented as a result of the probabilistic meaning of the evaluated scores.
Curriculum Redesign in Veterinary Medicine: Part I.
Chaney, Kristin P; Macik, Maria L; Turner, Jacqueline S; Korich, Jodi A; Rogers, Kenita S; Fowler, Debra; Scallan, Elizabeth M; Keefe, Lisa M
Curricular review is considered a necessary component for growth and enhancement of academic programs and requires time, energy, creativity, and persistence from both faculty and administration. At Texas A&M College of Veterinary Medicine & Biomedical Sciences (TAMU), the faculty and administration partnered with the university's Center for Teaching Excellence to create a faculty-driven, data-enhanced curricular redesign process. The 8-step process begins with the formation of a dedicated faculty curriculum design team to drive the redesign process and to support the college curriculum committee. The next steps include defining graduate outcomes and mapping the current curriculum to identify gaps and redundancies across the curriculum. Data are collected from internal and external stakeholders including veterinary students, faculty, alumni, and employers of graduates. Data collected through curriculum mapping and stakeholder engagement substantiate the curriculum redesign. The guidelines, supporting documents, and 8-step process developed at TAMU are provided to assist other veterinary schools in successful curricular redesign. This is the first of a two-part report that provides the background, context, and description of the process for charting the course for curricular change. The process involves defining expected learning outcomes for new graduates, conducting a curriculum mapping exercise, and collecting stakeholder data for curricular evaluation (steps 1-4). The second part of the report describes the development of rubrics that were applied to the graduate learning outcomes (steps 5-8) and engagement of faculty during the implementation phases of data-driven curriculum change.
Roberts, Peter L
2014-01-01
The theoretical potential for virus transmission by monoclonal antibody based therapeutic products has led to the inclusion of appropriate virus reduction steps. In this study, virus elimination by the chromatographic steps used during the purification process for two (IgG-1 and -3) monoclonal antibodies (MAbs) has been investigated. Both the Protein G (>7 log) and ion-exchange (5 log) chromatography steps were very effective for eliminating both enveloped and non-enveloped viruses over the lifetime of the chromatographic gel. However, the contribution made by the final gel filtration step was more limited, i.e., 3 log. Because these chromatographic columns were recycled between uses, the effectiveness of the column sanitization procedures (guanidinium chloride for protein G or NaOH for ion-exchange) was tested. By evaluating standard column runs immediately after each virus-spiked run, it was possible to directly confirm that there was no cross contamination with virus between column runs (guanidinium chloride or NaOH). To further ensure the virus safety of the product, two specific virus elimination steps have also been included in the process. A solvent/detergent step based on 1% Triton X-100 rapidly inactivated a range of enveloped viruses, giving >6 log inactivation within 1 min of a 60 min treatment time. Virus removal by the virus filtration step was also confirmed to be effective for those viruses of about 50 nm or greater. In conclusion, the combination of these multiple steps ensures a high margin of virus safety for this purification process. © 2014 American Institute of Chemical Engineers.
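The log figures quoted here are log10 reduction values (LRV = log10 of the virus load entering a step divided by the load leaving it), and an overall clearance claim rests on summing the LRVs of independent steps. A minimal sketch of that bookkeeping, with made-up titres, follows.

    import math

    def lrv(load_before, load_after):
        """Log10 reduction value of a single process step."""
        return math.log10(load_before / load_after)

    # Hypothetical titres (infectious units per mL) before/after each step
    steps = {
        "protein G capture":  (1e8, 1e1),
        "ion exchange":       (1e7, 1e2),
        "solvent/detergent":  (1e7, 1e1),
        "virus filtration":   (1e6, 1e2),
    }
    per_step = {name: lrv(before, after) for name, (before, after) in steps.items()}
    print(per_step, "overall claimed clearance:", sum(per_step.values()), "log10")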
Superhydrophobic aluminum alloy surfaces by a novel one-step process.
Saleema, N; Sarkar, D K; Paynter, R W; Chen, X-G
2010-09-01
A simple one-step process has been developed to render aluminum alloy surfaces superhydrophobic by immersing the aluminum alloy substrates in a solution containing NaOH and fluoroalkyl-silane (FAS-17) molecules. Scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and water contact angle measurements have been performed to characterize the morphological features, chemical composition and superhydrophobicity of the surfaces. The resulting surfaces provided a water contact angle as high as ∼162° and a contact angle hysteresis as low as ∼4°. The study indicates that it is possible to fabricate superhydrophobic aluminum surfaces easily and effectively without involving the traditional two-step processes.
High-volume workflow management in the ITN/FBI system
NASA Astrophysics Data System (ADS)
Paulson, Thomas L.
1997-02-01
The Identification Tasking and Networking (ITN) Federal Bureau of Investigation system will manage the processing of more than 70,000 submissions per day. The workflow manager controls the routing of each submission through a combination of automated and manual processing steps whose exact sequence is dynamically determined by the results at each step. For most submissions, one or more of the steps involve the visual comparison of fingerprint images. The ITN workflow manager is implemented within a scaleable client/server architecture. The paper describes the key aspects of the ITN workflow manager design which allow the high volume of daily processing to be successfully accomplished.
Step and Kink Dynamics in Inorganic and Protein Crystallization
NASA Technical Reports Server (NTRS)
Chernov, A. A.; Rashkovich, L. N.; Vekilov, P. G.; DeYoreo, J. J.
2004-01-01
Behavior of low-kink-density steps in solution growth and consequences for general understanding of spiral crystal growth processes will be overviewed. Also, influence of turbulence on step bunching and possibility to diminish this bunching will be presented.
Blumrich, Matthias A.; Salapura, Valentina
2010-11-02
An apparatus and method are disclosed for single-stepping coherence events in a multiprocessor system under software control in order to monitor the behavior of a memory coherence mechanism. Single-stepping coherence events in a multiprocessor system is made possible by adding one or more step registers. By accessing these step registers, one or more coherence requests are processed by the multiprocessor system. The step registers determine if the snoop unit will operate by proceeding in a normal execution mode, or operate in a single-step mode.
Shawyer, Frances; Enticott, Joanne C; Brophy, Lisa; Bruxner, Annie; Fossey, Ellie; Inder, Brett; Julian, John; Kakuma, Ritsuko; Weller, Penelope; Wilson-Evered, Elisabeth; Edan, Vrinda; Slade, Mike; Meadows, Graham N
2017-05-08
Recovery features strongly in Australian mental health policy; however, evidence is limited for the efficacy of recovery-oriented practice at the service level. This paper describes the Principles Unite Local Services Assisting Recovery (PULSAR) Specialist Care trial protocol for a recovery-oriented practice training intervention delivered to specialist mental health services staff. The primary aim is to evaluate whether adult consumers accessing services where staff have received the intervention report superior recovery outcomes compared to adult consumers accessing services where staff have not yet received the intervention. A qualitative sub-study aims to examine staff and consumer views on implementing recovery-oriented practice. A process evaluation sub-study aims to articulate important explanatory variables affecting the interventions rollout and outcomes. The mixed methods design incorporates a two-step stepped-wedge cluster randomized controlled trial (cRCT) examining cross-sectional data from three phases, and nested qualitative and process evaluation sub-studies. Participating specialist mental health care services in Melbourne, Victoria are divided into 14 clusters with half randomly allocated to receive the staff training in year one and half in year two. Research participants are consumers aged 18-75 years who attended the cluster within a previous three-month period either at baseline, 12 (step 1) or 24 months (step 2). In the two nested sub-studies, participation extends to cluster staff. The primary outcome is the Questionnaire about the Process of Recovery collected from 756 consumers (252 each at baseline, step 1, step 2). Secondary and other outcomes measuring well-being, service satisfaction and health economic impact are collected from a subset of 252 consumers (63 at baseline; 126 at step 1; 63 at step 2) via interviews. Interview-based longitudinal data are also collected 12 months apart from 88 consumers with a psychotic disorder diagnosis (44 at baseline, step 1; 44 at step 1, step 2). cRCT data will be analyzed using multilevel mixed-effects modelling to account for clustering and some repeated measures, supplemented by thematic analysis of qualitative interview data. The process evaluation will draw on qualitative, quantitative and documentary data. Findings will provide an evidence-base for the continued transformation of Australian mental health service frameworks toward recovery. Australian and New Zealand Clinical Trial Registry: ACTRN12614000957695 . Date registered: 8 September 2014.
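A minimal sketch of the kind of multilevel mixed-effects analysis described for the primary outcome is shown below, using a random intercept per cluster; the variable names (qpr, exposed, phase, cluster) and the toy data frame are placeholders, not the trial's dataset or its final model specification.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder data: one row per consumer, with a QPR score, intervention
    # exposure, the trial phase (0 = baseline, 1 = step 1, 2 = step 2) and cluster.
    df = pd.DataFrame({
        "qpr":     [38, 42, 45, 40, 47, 50, 36, 41, 49, 44, 46, 52],
        "exposed": [0,  0,  1,  0,  1,  1,  0,  0,  1,  0,  1,  1],
        "phase":   [0,  1,  2,  0,  1,  2,  0,  1,  2,  0,  1,  2],
        "cluster": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    })

    # Fixed effects for exposure and phase (secular trend), random intercept per cluster
    model = smf.mixedlm("qpr ~ exposed + phase", df, groups=df["cluster"])
    print(model.fit().summary())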
DOT National Transportation Integrated Search
2011-01-01
Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional f...
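For context, the four classical steps are trip generation, trip distribution, mode choice, and traffic assignment. The following minimal Python sketch, using entirely made-up zone data, illustrates only the trip-distribution step with a simple gravity model; it is not drawn from the report above.

import numpy as np

# Toy gravity-model trip distribution (one of the four classical steps).
productions = np.array([100.0, 200.0, 150.0])   # trips produced per zone
attractions = np.array([180.0, 120.0, 150.0])   # trips attracted per zone
cost = np.array([[1.0, 3.0, 5.0],
                 [3.0, 1.0, 2.0],
                 [5.0, 2.0, 1.0]])               # travel cost between zones

friction = np.exp(-0.5 * cost)                   # assumed impedance function
trips = np.outer(productions, attractions) * friction

# Iterative proportional fitting so row/column sums match productions/attractions.
for _ in range(50):
    trips *= (productions / trips.sum(axis=1))[:, None]
    trips *= (attractions / trips.sum(axis=0))[None, :]

print(np.round(trips, 1))   # origin-destination trip table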
Enzymatic enrichment of egg-yolk phosphatidylcholine with alpha-linolenic acid.
Chojnacka, A; Gładkowski, W; Kiełbowicz, G; Wawrzeńczyk, C
2009-05-01
alpha-Linolenic acid (ALA) was incorporated at 28% into the sn-1 position of egg-yolk phosphatidylcholine using Novozyme 435 in a one-step transesterification process. Using phospholipase A(2) in a two-step process gave 25% incorporation of ALA into the sn-2 position.
One-Step Real-Image Reflection Holograms
ERIC Educational Resources Information Center
Buah-Bassuah, Paul K.; Vannoni, Maurizio; Molesini, Giuseppe
2007-01-01
A holographic process is presented where the object is made of the real image produced by a two-mirror system. Single-step reflection hologram recording is achieved. Details of the process are given, optics concepts are outlined and demonstrative results are presented. (Contains 6 figures and 2 footnotes.)
Seven Steps to Responsible Software Selection. ERIC Digest.
ERIC Educational Resources Information Center
Komoski, P. Kenneth; Plotnick, Eric
Microcomputers in schools contribute significantly to the learning process, and software selection is taken as seriously as the selection of textbooks. The seven steps for responsible software selection are: (1) analyzing needs, including the differentiation between needs and objectives; (2) specification of requirements; (3) identifying…
A three step supercritical process to improve the dissolution rate of eflucimibe.
Rodier, Elisabeth; Lochard, Hubert; Sauceau, Martial; Letourneau, Jean-Jacques; Freiss, Bernard; Fages, Jacques
2005-10-01
The aim of this study is to improve the dissolution properties of a poorly soluble active substance, Eflucimibe, by associating it with gamma-cyclodextrin. To achieve this objective, a new three-step process based on supercritical fluid technology has been proposed. First, Eflucimibe and cyclodextrin are co-crystallized using an anti-solvent process, dimethylsulfoxide being the solvent and supercritical carbon dioxide being the anti-solvent. Second, the co-crystallized powder is held in a static mode under supercritical conditions for several hours. This is the maturing step. Third, in a final stripping step, supercritical CO2 is flowed through the matured powder to extract the residual solvent. The coupling of the first two steps brings about a significant synergistic effect to improve the dissolution rate of the drug. The nature of the entity obtained at the end of each step is discussed and some suggestions are made as to what happens in these operations. It is shown that the co-crystallization ensures a good dispersion of both compounds and is rather insensitive to the operating parameters tested. The maturing step allows some dissolution-recrystallization to occur, thus intensifying the intimate contact between the two compounds. Addition of water is necessary to make maturing effective, as this is governed by the transfer properties of the medium. The stripping step allows extraction of the residual solvent but also removes some of the Eflucimibe, which is the main drawback of this final stage.
Process for the synthesis of aliphatic alcohol-containing mixtures
Greene, Marvin I.; Gelbein, Abraham P.
1984-01-01
A process for the synthesis of mixtures which include saturated aliphatic alcohols is disclosed. In the first step of the process, the first catalyst activation stage, a catalyst, which comprises the oxides of copper, zinc, aluminum, potassium and one or two additional metals selected from the group consisting of chromium, magnesium, cerium, cobalt, thorium and lanthanum, is partially activated. In this step, a reducing gas stream, which includes hydrogen and at least one inert gas, flows past the catalyst at a space velocity of up to 5,000 liters (STP) per hour, per kilogram of catalyst. The partially activated catalyst is then subjected to the second step of the process, second-stage catalyst activation. In this step, the catalyst is contacted by an activation gas stream comprising hydrogen and carbon monoxide present in a volume ratio of 0.5:1 and 4:1, respectively, at a temperature of 200° to 450°C and a pressure of between 35 and 200 atmospheres. The activation gas flows at a space velocity of from 1,000 to 20,000 liters (STP) per hour, per kilogram of catalyst. Second-stage activation continues until the catalyst is contacted with at least 500,000 liters (STP) of activation gas per kilogram of catalyst. The fully activated catalyst, in the third step of the process, contacts a synthesis gas stream comprising hydrogen and carbon monoxide.
Process for the synthesis of aliphatic alcohol-containing mixtures
Greene, M.I.; Gelbein, A.P.
1984-10-16
A process for the synthesis of mixtures which include saturated aliphatic alcohols is disclosed. In the first step of the process, the first catalyst activation stage, a catalyst, which comprises the oxides of copper, zinc, aluminum, potassium and one or two additional metals selected from the group consisting of chromium, magnesium, cerium, cobalt, thorium and lanthanum, is partially activated. In this step, a reducing gas stream, which includes hydrogen and at least one inert gas, flows past the catalyst at a space velocity of up to 5,000 liters (STP) per hour, per kilogram of catalyst. The partially activated catalyst is then subjected to the second step of the process, second-stage catalyst activation. In this step, the catalyst is contacted by an activation gas stream comprising hydrogen and carbon monoxide present in a volume ratio of 0.5:1 and 4:1, respectively, at a temperature of 200 to 450 C and a pressure of between 35 and 200 atmospheres. The activation gas flows at a space velocity of from 1,000 to 20,000 liters (STP) per hour, per kilogram of catalyst. Second-stage activation continues until the catalyst is contacted with at least 500,000 liters (STP) of activation gas per kilogram of catalyst. The fully activated catalyst, in the third step of the process, contacts a synthesis gas stream comprising hydrogen and carbon monoxide.
2006-05-01
dies. This process uses a laser beam to melt a controlled amount of injected powder on a base plate to deposit the first layer and on previous passes... Consolidation” to build functional net-shape components directly from metallic powder in one step [1-3]. The laser consolidation is a one-step computer-aided... A focused laser beam is irradiated on the substrate to create a molten pool, while metallic powder is injected simultaneously into the pool. A
ERIC Educational Resources Information Center
Perley, Gordon F.
This is a guide for standard operating job procedures for the pump station process of wastewater treatment plants. Step-by-step instructions are given for pre-start up inspection, start-up procedures, continuous routine operation procedures, and shut-down procedures. A general description of the equipment used in the process is given. Two…
Code of Federal Regulations, 2010 CFR
2010-04-01
25 CFR 256.14 (Indians, Bureau of Indian Affairs): What are the steps that must be taken to process my application for the Housing Improvement Program? (a) The servicing housing office must...
First Processing Steps and the Quality of Wild and Farmed Fish
Borderías, Antonio J; Sánchez-Alonso, Isabel
2011-01-01
The first processing steps for fish are species-dependent, and many practices are common to wild and farmed fish. Fish farming does, however, have certain advantages over traditional fisheries in that the processor can influence postmortem biochemistry and various quality parameters. This review summarizes information about the primary processing of fish based on the influence of catching, slaughtering, bleeding, gutting, washing, and filleting. Recommendations are given for the correct primary processing of fish. PMID:21535702
The Package-Based Development Process in the Flight Dynamics Division
NASA Technical Reports Server (NTRS)
Parra, Amalia; Seaman, Carolyn; Basili, Victor; Kraft, Stephen; Condon, Steven; Burke, Steven; Yakimovich, Daniil
1997-01-01
The Software Engineering Laboratory (SEL) has been operating for more than two decades in the Flight Dynamics Division (FDD) and has adapted to the constant movement of the software development environment. The SEL's Quality Improvement Paradigm (QIP) shows that process improvement is an iterative process. Understanding, Assessing and Packaging are the three steps that are followed in this cyclical paradigm. As the improvement process cycles back to the first step, after having packaged some experience, the level of understanding will be greater. In the past, products resulting from the packaging step have been large process documents, guidebooks, and training programs. As the technical world moves toward more modularized software, we have made a move toward more modularized software development process documentation; as a result, the products of the packaging step are becoming smaller and more frequent. In this manner, the QIP takes on a more spiral approach rather than a waterfall one. This paper describes the state of the FDD in the area of software development processes, as revealed through the understanding and assessing activities conducted by the COTS study team. The insights presented include: (1) a characterization of a typical FDD Commercial Off the Shelf (COTS) intensive software development life-cycle process, (2) lessons learned through the COTS study interviews, and (3) a description of changes in the SEL due to the changing and accelerating nature of software development in the FDD.
49 CFR 40.251 - What are the first steps in an alcohol confirmation test?
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR 40.251: What are the first steps in an alcohol confirmation test? As the BAT for an alcohol confirmation test, you must follow these steps to begin the confirmation test process: (a) You must carry out a...
Sealed-bladdered chemical processing method and apparatus
Harless, D. Phillip
1999-01-01
A method and apparatus which enables a complete multi-stepped chemical treatment process to occur within a single, sealed-bladdered vessel 31. The entire chemical process occurs without interruption of the sealed-bladdered vessel 31 such as opening the sealed-bladdered vessel 31 between various steps of the process. The sealed-bladdered vessel 31 is loaded with a batch to be dissolved, treated, decanted, rinsed and/or dried. A pressure filtration step may also occur. The self-contained chemical processing apparatus 32 contains a sealed-bladder 32, a fluid pump 34, a reservoir 20, a compressed gas inlet, a vacuum pump 24, and a cold trap 23 as well as the associated piping 33, numerous valves 21,22,25,26,29,30,35,36 and other controls associated with such an apparatus. The claimed invention allows for dissolution and/or chemical treatment without the operator of the self-contained chemical processing apparatus 38 coming into contact with any of the process materials.
Processes for producing low cost, high efficiency silicon solar cells
Rohatgi, Ajeet; Chen, Zhizhang; Doshi, Parag
1996-01-01
Processes which utilize rapid thermal processing (RTP) are provided for inexpensively producing high efficiency silicon solar cells. The RTP processes preserve minority carrier bulk lifetime τ and permit selective adjustment of the depth of the diffused regions, including emitter and back surface field (bsf), within the silicon substrate. Silicon solar cell efficiencies of 16.9% have been achieved. In a first RTP process, an RTP step is utilized to simultaneously diffuse phosphorus and aluminum into the front and back surfaces, respectively, of a silicon substrate. Moreover, an in situ controlled cooling procedure preserves the carrier bulk lifetime τ and permits selective adjustment of the depth of the diffused regions. In a second RTP process, both simultaneous diffusion of the phosphorus and aluminum as well as annealing of the front and back contacts are accomplished during the RTP step. In a third RTP process, the RTP step accomplishes simultaneous diffusion of the phosphorus and aluminum, annealing of the contacts, and annealing of a double-layer antireflection/passivation coating SiN/SiOx.
Eureka: Six Easy Steps to Research Success
ERIC Educational Resources Information Center
Hubel, Joy Alter
2005-01-01
Eureka is similar to the Big6™ research skills by Michael Eisenberg and Bob Berkowitz, as both methods simplify the complex process of critical information gathering into six user-friendly steps. The six research steps to Eureka are presented.
An Ecological Approach to Learning Dynamics
ERIC Educational Resources Information Center
Normak, Peeter; Pata, Kai; Kaipainen, Mauri
2012-01-01
New approaches to emergent learner-directed learning design can be strengthened with a theoretical framework that considers learning as a dynamic process. We propose an approach that models a learning process using a set of spatial concepts: learning space, position of a learner, niche, perspective, step, path, direction of a step and step…
Using Mixed Methods to Assess Initiatives with Broad-Based Goals
ERIC Educational Resources Information Center
Inkelas, Karen Kurotsuchi
2017-01-01
This chapter describes a process for assessing programmatic initiatives with broad-ranging goals with the use of a mixed-methods design. Using an example of a day-long teaching development conference, this chapter provides practitioners step-by-step guidance on how to implement this assessment process.
Using Institutional Survey Data to Jump-Start Your Benchmarking Process
ERIC Educational Resources Information Center
Chow, Timothy K. C.
2012-01-01
Guided by the missions and visions, higher education institutions utilize benchmarking processes to identify better and more efficient ways to carry out their operations. Aside from the initial planning and organization steps involved in benchmarking, a matching or selection step is crucial for identifying other institutions that have good…
Initial Crisis Reaction and Poliheuristic Theory
ERIC Educational Resources Information Center
DeRouen, Karl, Jr.; Sprecher, Christopher
2004-01-01
Poliheuristic (PH) theory models foreign policy decisions using a two-stage process. The first step eliminates alternatives on the basis of a simplifying heuristic. The second step involves a selection from among the remaining alternatives and can employ a more rational and compensatory means of processing information. The PH model posits that…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
DEPARTMENT OF HEALTH AND HUMAN SERVICES, Food and Drug Administration [Docket No. FDA-2011-D-0104...]: implementing appropriate steps during the manufacturing process to prevent cross-contamination of finished pharmaceuticals...
Strategic Marketing: The President's Perspective.
ERIC Educational Resources Information Center
Pappas, Richard J.; Shaink, M. Richard
1994-01-01
Provides a step-by-step guide to developing a college marketing plan. Identifying a target market and determining an appropriate mix of promotional strategies are considered key to the process. Highlights the college president's role in the marketing process, indicating that, although the president is the chief marketer, all employees must be…
Improving Program Performance through Management Information. A Workbook.
ERIC Educational Resources Information Center
Bienia, Nancy
Designed specifically for state and local managers and supervisors who plan, direct, and operate child support enforcement programs, this workbook provides a four-part, step-by-step process for identifying needed information and methods of using the information to operate an effective program. The process consists of: (1) determining what…
ERIC Educational Resources Information Center
Research for Better Schools, Inc., Philadelphia, PA.
The process for providing a "thorough and efficient" (T & E) education according to New Jersey statutes and regulations involves six basic steps. This document suggests procedures for handling the fifth step, educational program evaluation. Processes discussed include committee formation, evaluation planning, action plan…
Developing questionnaires for educational research: AMEE Guide No. 87
La Rochelle, Jeffrey S.; Dezee, Kent J.; Gehlbach, Hunter
2014-01-01
In this AMEE Guide, we consider the design and development of self-administered surveys, commonly called questionnaires. Questionnaires are widely employed in medical education research. Unfortunately, the processes used to develop such questionnaires vary in quality and lack consistent, rigorous standards. Consequently, the quality of the questionnaires used in medical education research is highly variable. To address this problem, this AMEE Guide presents a systematic, seven-step process for designing high-quality questionnaires, with particular emphasis on developing survey scales. These seven steps do not address all aspects of survey design, nor do they represent the only way to develop a high-quality questionnaire. Instead, these steps synthesize multiple survey design techniques and organize them into a cohesive process for questionnaire developers of all levels. Addressing each of these steps systematically will improve the probabilities that survey designers will accurately measure what they intend to measure. PMID:24661014
Developing questionnaires for educational research: AMEE Guide No. 87.
Artino, Anthony R; La Rochelle, Jeffrey S; Dezee, Kent J; Gehlbach, Hunter
2014-06-01
In this AMEE Guide, we consider the design and development of self-administered surveys, commonly called questionnaires. Questionnaires are widely employed in medical education research. Unfortunately, the processes used to develop such questionnaires vary in quality and lack consistent, rigorous standards. Consequently, the quality of the questionnaires used in medical education research is highly variable. To address this problem, this AMEE Guide presents a systematic, seven-step process for designing high-quality questionnaires, with particular emphasis on developing survey scales. These seven steps do not address all aspects of survey design, nor do they represent the only way to develop a high-quality questionnaire. Instead, these steps synthesize multiple survey design techniques and organize them into a cohesive process for questionnaire developers of all levels. Addressing each of these steps systematically will improve the probabilities that survey designers will accurately measure what they intend to measure.
Murav'ev, M I; Fomchenko, N V; Kondrat'eva, T V
2015-01-01
We examined the chemical leaching and biooxidation stages in a two-stage biooxidation process of an auriferous sulfide concentrate containing pyrrhotite, arsenopyrite and pyrite. Chemical leaching of the concentrate (slurry density of 200 g/L) by a ferric sulfate biosolvent (initial concentration of 35.6 g/L), which was obtained by microbial oxidation of ferrous sulfate, for 2 hours at 70°C and pH 1.4 was allowed to oxidize 20.4% of arsenopyrite and 52.1% of sulfur. The most effective biooxidation of the chemically leached concentrate was observed at 45°C in the presence of yeast extract. Oxidation of the sulfide concentrate in a two-step process proceeded more efficiently than in a one-step process. In the two-step mode, gold extraction from the precipitate was 10% higher and the content of elemental sulfur was two times lower than in the one-step process.
Processes to remove acid forming gases from exhaust gases
Chang, S.G.
1994-09-20
The present invention relates to a process for reducing the concentration of NO in a gas, which process comprises: (A) contacting a gas sample containing NO with a gaseous oxidizing agent to oxidize the NO to NO2; (B) contacting the gas sample of step (A) comprising NO2 with an aqueous reagent of bisulfite/sulfite and a compound selected from urea, sulfamic acid, hydrazinium ion, hydrazoic acid, nitroaniline, sulfanilamide, sulfanilic acid, mercaptopropanoic acid, mercaptosuccinic acid, cysteine or combinations thereof at between about 0 and 100 °C at a pH of between about 1 and 7 for between about 0.01 and 60 sec; and (C) optionally contacting the reaction product of step (A) with conventional chemical reagents to reduce the concentrations of the organic products of the reaction in step (B) to environmentally acceptable levels. Urea or sulfamic acid are preferred, especially sulfamic acid, and step (C) is not necessary or performed. 16 figs.
Evaluation of Vitrification Processing Step for Rocky Flats Incinerator Ash
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wigent, W.L.; Luey, J.K.; Scheele, R.D.
In 1997, Pacific Northwest National Laboratory (PNNL) staff developed a processing option for incinerator ash at the Rocky Flats Environmental Technology Site (RFETS). This work was performed with support from Los Alamos National Laboratory (LANL) and Safe Sites of Colorado (SSOC). A description of the remediation needs for the RFETS incinerator ash is provided in a report summarizing the recommended processing option for treatment of the ash (Lucy et al. 1998). The recommended process flowsheet involves a calcination pretreatment step to remove carbonaceous material followed by a vitrification processing step for a mixture of glass frit and calcined incinerator ash. Using the calcination pretreatment step to remove carbonaceous material reduced process upsets for the vitrification step, allowed for increased waste loading in the final product, and improved the quality of the final product. Figure 1.1 illustrates the flow sheet for the recommended processing option for treatment of RFETS incinerator ash. In 1998, work at PNNL further developed the recommended flow sheet through a series of studies to better define the vitrification operating parameters and to address secondary processing issues (such as characterizing the offgas species from the calcination process). Because a prototypical rotary calciner was not available for use, studies to evaluate the offgas from the calcination process were performed using a benchtop rotary calciner and laboratory-scale equipment (Lucy et al. 1998). This report focuses on the vitrification process step after ash has been calcined. Testing with full-scale containers was performed using ash surrogates and a muffle furnace similar to that planned for use at RFETS. Small-scale testing was performed using plutonium-bearing incinerator ash to verify performance of the waste form. Ash was not obtained from RFETS because of transportation requirements to calcine the incinerator ash prior to shipment of the material. Because part of PNNL's work was to characterize the ash prior to calcination and to investigate the effect of calcination on product quality, representative material was obtained from LANL. Ash obtained from LANL was selected based on its similarity to that currently stored at RFETS. The plutonium-bearing ashes obtained from LANL are likely from a RFETS incinerator, but the exact origin was not identified.
Biodiesel production from waste frying oil using waste animal bone and solar heat.
Corro, Grisel; Sánchez, Nallely; Pal, Umapada; Bañuelos, Fortino
2016-01-01
A two-step catalytic process for the production of biodiesel from waste frying oil (WFO) at low cost, utilizing waste animal-bone as catalyst and solar radiation as heat source is reported in this work. In the first step, the free fatty acids (FFA) in WFO were esterified with methanol by a catalytic process using calcined waste animal-bone as catalyst, which remains active even after 10 esterification runs. The trans-esterification step was catalyzed by NaOH through thermal activation process. Produced biodiesel fulfills all the international requirements for its utilization as a fuel. A probable reaction mechanism for the esterification process is proposed considering the presence of hydroxyapatite at the surface of calcined animal bones. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ali Elsheikh, Yasir; Hassan Akhtar, Faheem
2014-01-01
Biodiesel was prepared from Citrullus colocynthis oil (CCO) via a two-step process. The first esterification step was explored in two ionic liquids (ILs): 1,3-disulfonic acid imidazolium hydrogen sulfate (DSIMHSO4) and 3-methyl-1-sulfonic acid imidazolium hydrogen sulfate (MSIMHSO4). Both ILs appeared to be good candidates to replace hazardous acidic catalysts due to their exceptional properties. However, the two sulfonic chains existing in DSIMHSO4 were found to give the IL a higher acidity than the single sulfonic chain in MSIMHSO4. Based on the results, 3.6 wt% of DSIMHSO4, a methanol/CCO molar ratio of 12:1, and 150°C offered a final FFA conversion of 95.4% within 105 min. A 98.2% yield was produced via the second KOH-catalyzed step at 1.0%, a 6:1 molar ratio, 600 rpm, and 60°C for 50 min. This new two-step catalyzed process could solve the corrosion and environmental problems associated with the current acidic catalysts. PMID:24987736
NASA Astrophysics Data System (ADS)
Yang, Jun; Wang, Ze-Xin; Lu, Sheng; Lv, Wei-gang; Jiang, Xi-zhi; Sun, Lei
2017-03-01
The micro-arc oxidation (MAO) process was conducted on ZK60 Mg alloy under two- and three-step voltage-increasing modes with a DC pulse electrical source. The effect of each mode on the current-time responses during the MAO process and on the coating characteristics was analysed and discussed systematically. The microstructure, thickness and corrosion resistance of the MAO coatings were evaluated by scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), microscopy with super depth of field and electrochemical impedance spectroscopy (EIS). The results indicate that the two- and three-step voltage-increasing modes can improve weak spark discharges with insufficient breakdown strength in the later period of the MAO process. Due to the higher value of voltage and voltage increment, the coating with a maximum thickness of about 20.20 μm formed under the two-step voltage-increasing mode shows the best corrosion resistance. In addition, the coating fabricated under the three-step voltage-increasing mode is smoother, with better corrosion resistance due to the lower amplitude of the voltage increases.
Advanced 3D image processing techniques for liver and hepatic tumor location and volumetry
NASA Astrophysics Data System (ADS)
Chemouny, Stephane; Joyeux, Henri; Masson, Bruno; Borne, Frederic; Jaeger, Marc; Monga, Olivier
1999-05-01
To assist radiologists and physicians in diagnosis, treatment planning, and evaluation in liver oncology, we have developed a fast and accurate segmentation of the liver and its lesions within CT-scan exams. The first step of our method is to reduce the spatial resolution of the CT images. This has two effects: it yields a near-isotropic 3D data space and drastically decreases the computational time for further processing. In the second step, a 3D non-linear 'edge-preserving' smoothing filter is applied throughout the entire exam. In the third step, the 3D regions coming out of the second step are homogeneous enough to allow a fairly simple segmentation process, based on morphological operations under supervisor control, ending up with accurate 3D regions of interest (ROIs) of the liver and all the hepatic tumors. In the fourth step, the ROIs are set back into the original images, and features such as volume and location are immediately computed and displayed. The segmentation we obtain is as precise as a manual one but is much faster.
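A minimal Python sketch of a pipeline in the spirit of these four steps is shown below; the thresholds, the median filter standing in for the edge-preserving filter, and the random stand-in volume are all illustrative assumptions, not the authors' actual algorithm or parameters.

import numpy as np
from scipy import ndimage

def rough_liver_roi(ct_volume, low=80, high=200, downsample=2):
    """Illustrative 3D pipeline loosely following the four steps above."""
    # Step 1: reduce spatial resolution (near-isotropic grid, faster processing).
    small = ct_volume[::downsample, ::downsample, ::downsample]
    # Step 2: 3D smoothing (a median filter stands in for the edge-preserving filter).
    smooth = ndimage.median_filter(small, size=3)
    # Step 3: simple segmentation on the homogeneous volume: threshold + morphology.
    mask = (smooth > low) & (smooth < high)
    mask = ndimage.binary_opening(mask, iterations=2)
    mask = ndimage.binary_closing(mask, iterations=2)
    labels, n = ndimage.label(mask)
    if n:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (1 + int(np.argmax(sizes)))   # keep the largest region
    # Step 4: set the ROI back onto the original grid and report simple features.
    roi = mask.repeat(downsample, 0).repeat(downsample, 1).repeat(downsample, 2)
    roi = roi[:ct_volume.shape[0], :ct_volume.shape[1], :ct_volume.shape[2]]
    return roi, int(roi.sum()), ndimage.center_of_mass(roi)

volume = np.random.randint(-100, 300, size=(40, 64, 64))   # stand-in CT volume
roi, voxels, centroid = rough_liver_roi(volume)
print(voxels, centroid)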
Ouertani, Rachid; Hamdi, Abderrahmen; Amri, Chohdi; Khalifa, Marouan; Ezzaouia, Hatem
2014-01-01
In this work, we use a two-step metal-assisted chemical etching method to produce films of silicon nanowires shaped into micrograins from metallurgical-grade polycrystalline silicon powder. The first step is an electroless plating process in which the powder was dipped for a few minutes in an aqueous solution of silver nitrate and hydrofluoric acid to permit Ag plating of the Si micrograins. During the second step, corresponding to silicon dissolution, we add a small quantity of hydrogen peroxide to the plating solution and leave the samples to be etched for three different durations (30, 60, and 90 min). We attempt to elucidate the mechanisms leading to the formation of the silver clusters and silicon nanowires obtained at the end of the silver plating step and the silver-assisted silicon dissolution step, respectively. Scanning electron microscopy (SEM) micrographs revealed that the processed Si micrograins were covered with densely packed films of self-organized silicon nanowires. Some of these nanowires stand vertically, and some others tilt toward the silicon micrograin facets. The thickness of the nanowire films increases from 0.2 to 10 μm with increasing etching time. Based on SEM characterizations, laser scattering estimations, X-ray diffraction (XRD) patterns, and Raman spectroscopy, we present a correlative study dealing with the effect of the silver-assisted etching process on the morphological and structural properties of the processed silicon nanowire films.
2014-01-01
In this work, we use a two-step metal-assisted chemical etching method to produce films of silicon nanowires shaped into micrograins from metallurgical-grade polycrystalline silicon powder. The first step is an electroless plating process in which the powder was dipped for a few minutes in an aqueous solution of silver nitrate and hydrofluoric acid to permit Ag plating of the Si micrograins. During the second step, corresponding to silicon dissolution, we add a small quantity of hydrogen peroxide to the plating solution and leave the samples to be etched for three different durations (30, 60, and 90 min). We attempt to elucidate the mechanisms leading to the formation of the silver clusters and silicon nanowires obtained at the end of the silver plating step and the silver-assisted silicon dissolution step, respectively. Scanning electron microscopy (SEM) micrographs revealed that the processed Si micrograins were covered with densely packed films of self-organized silicon nanowires. Some of these nanowires stand vertically, and some others tilt toward the silicon micrograin facets. The thickness of the nanowire films increases from 0.2 to 10 μm with increasing etching time. Based on SEM characterizations, laser scattering estimations, X-ray diffraction (XRD) patterns, and Raman spectroscopy, we present a correlative study dealing with the effect of the silver-assisted etching process on the morphological and structural properties of the processed silicon nanowire films. PMID:25349554
Rhodes, Scott D; Mann-Jackson, Lilli; Alonzo, Jorge; Simán, Florence M; Vissman, Aaron T; Nall, Jennifer; Abraham, Claire; Aronson, Robert E; Tanner, Amanda E
2017-12-01
The science underlying the development of individual, community, system, and policy interventions designed to reduce health disparities has lagged behind other innovations. Few models, theoretical frameworks, or processes exist to guide intervention development. Our community-engaged research partnership has been developing, implementing, and evaluating efficacious interventions to reduce HIV disparities for over 15 years. Based on our intervention research experiences, we propose a novel 13-step process designed to demystify and guide intervention development. Our intervention development process includes steps such as establishing an intervention team to manage the details of intervention development; assessing community needs, priorities, and assets; generating intervention priorities; evaluating and incorporating theory; developing a conceptual or logic model; crafting activities; honing materials; administering a pilot, noting its process, and gathering feedback from all those involved; and editing the intervention based on what was learned. Here, we outline and describe each of these 13 steps.
Direct coal liquefaction process
Rindt, John R.; Hetland, Melanie D.
1993-01-01
An improved multistep liquefaction process for organic carbonaceous matter which produces a virtually completely solvent-soluble carbonaceous liquid product. The solubilized product may be more amenable to further processing than liquid products produced by current methods. In the initial processing step, the finely divided organic carbonaceous material is treated with a hydrocarbonaceous pasting solvent containing from 10% to 100% by weight process-derived phenolic species at a temperature within the range of 300°C to 400°C for typically from 2 minutes to 120 minutes in the presence of a carbon monoxide reductant and an optional hydrogen sulfide reaction promoter in an amount ranging from 0 to 10% by weight of the moisture- and ash-free organic carbonaceous material fed to the system. As a result, hydrogen is generated via the water/gas shift reaction at a rate necessary to prevent condensation reactions. In a second step, the reaction product of the first step is hydrogenated.
Direct coal liquefaction process
Rindt, J.R.; Hetland, M.D.
1993-10-26
An improved multistep liquefaction process for organic carbonaceous matter which produces a virtually completely solvent-soluble carbonaceous liquid product. The solubilized product may be more amenable to further processing than liquid products produced by current methods. In the initial processing step, the finely divided organic carbonaceous material is treated with a hydrocarbonaceous pasting solvent containing from 10% to 100% by weight process-derived phenolic species at a temperature within the range of 300 C to 400 C for typically from 2 minutes to 120 minutes in the presence of a carbon monoxide reductant and an optional hydrogen sulfide reaction promoter in an amount ranging from 0 to 10% by weight of the moisture- and ash-free organic carbonaceous material fed to the system. As a result, hydrogen is generated via the water/gas shift reaction at a rate necessary to prevent condensation reactions. In a second step, the reaction product of the first step is hydrogenated.
NASA Astrophysics Data System (ADS)
Sethuram, D.; Srisailam, Shravani; Rao Ponangi, Babu
2018-04-01
Austempered Ductile Iron (ADI) is an exciting alloy of iron which offers design engineers the best combination of high strength-to-weight ratio, low-cost design flexibility, good toughness, and wear resistance along with fatigue strength. The two-step austempering procedure helps in simultaneously improving the tensile strength as well as the ductility beyond that of the conventional austempering process. An extensive literature survey reveals that its mechanical and wear behaviour are dependent on heat treatment and alloy additions. The current work focuses on characterizing the two-step ADI samples (TSADI), developed by a novel heat treatment process, for resistance to corrosion and wear. The samples of Ductile Iron were austempered by the two-step austempering process at temperatures from 300°C to 450°C in steps of 50°C. Temperatures were gradually increased at the rate of 14°C/hour. In acidic medium (H2SO4), the austempered samples showed better corrosion resistance compared to conventional ductile iron. It has been observed from the wear studies that the TSADI sample at 350°C shows better wear resistance compared to ductile iron. The results are discussed in terms of fractographs, process variables and microstructural features of TSADI samples.
Systematic procedure for designing processes with multiple environmental objectives.
Kim, Ki-Joo; Smith, Raymond L
2005-04-01
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
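As a toy illustration of the Pareto-optimality idea underlying the evaluation step (not the paper's parallel multiobjective genetic algorithm), the following Python sketch filters a set of made-up candidate designs, with all objectives expressed as quantities to minimize:

def pareto_front(points):
    """Return the non-dominated points, assuming every objective is minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# candidate designs as (environmental impact, negative profit) pairs -- invented data
designs = [(8.0, -3.0), (5.0, -2.0), (5.0, -4.0), (9.0, -5.0), (4.0, -1.0)]
print(pareto_front(designs))   # -> [(5.0, -4.0), (9.0, -5.0), (4.0, -1.0)]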
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures for manufacturing systems have been proposed, aiming at realizing more flexible control structures of the manufacturing systems. Much research has been carried out to deal with the distributed architectures for planning and control of the manufacturing systems. However, human operators have not yet been considered as autonomous components of the distributed manufacturing systems. A real-time scheduling method is proposed, in this research, to select suitable combinations of the human operators, the resources and the jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility-value-based method and the dispatching-rule-based method proposed in previous researches. Some case studies have been carried out to verify the effectiveness of the proposed method.
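A highly simplified Python sketch of the three selection steps is given below; the preference and utility numbers, the greedy matching, and all identifiers are invented for illustration and do not reproduce the authors' utility-value or dispatching-rule methods.

# Step 1: each operator picks the process type they most prefer for the next period.
operators = {"op1": {"drill": 0.9, "mill": 0.4}, "op2": {"drill": 0.2, "mill": 0.8}}
chosen_process = {op: max(prefs, key=prefs.get) for op, prefs in operators.items()}

# Step 2: machine-job pairs are chosen greedily by an assumed utility value.
machine_utility = {("mill_1", "jobA"): 0.7, ("mill_1", "jobB"): 0.5,
                   ("drill_1", "jobA"): 0.3, ("drill_1", "jobB"): 0.9}
assigned_jobs, machine_plan = set(), {}
for (machine, job), u in sorted(machine_utility.items(), key=lambda kv: -kv[1]):
    if machine not in machine_plan and job not in assigned_jobs:
        machine_plan[machine] = job
        assigned_jobs.add(job)

# Step 3: an AGV-job pair is chosen for the next transportation process.
agv_utility = {("agv_1", "jobA"): 0.6, ("agv_1", "jobB"): 0.4}
best_agv, best_job = max(agv_utility, key=agv_utility.get)
agv_plan = {best_agv: best_job}

print(chosen_process, machine_plan, agv_plan)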
Antisymmetric vortex interactions in the wake behind a step cylinder
NASA Astrophysics Data System (ADS)
Tian, Cai; Jiang, Fengjian; Pettersen, Bjørnar; Andersson, Helge I.
2017-10-01
Flow around a step cylinder at the Reynolds number 150 was simulated by directly solving the full Navier-Stokes equations. The configuration was adopted from the work of Morton and Yarusevych ["Vortex shedding in the wake of a step cylinder," Phys. Fluids 22, 083602 (2010)], in which the wake dynamics were systematically described. A more detailed investigation of the vortex dislocation process has now been performed. Two kinds of new loop vortex structures were identified. Additionally, antisymmetric vortex interactions in two adjacent vortex dislocation processes were observed and explained. The results in this letter serve as a supplement for a more thorough understanding of the vortex dynamics in the step cylinder wake.
Terraforming - Making an earth of Mars
NASA Astrophysics Data System (ADS)
McKay, C. P.
1987-12-01
The possibility of creating a habitable environment on Mars via terraforming is discussed. The first step is to determine the amount, distribution, and chemical state of water, carbon dioxide, and nitrogen. The process of warming Mars and altering its atmosphere naturally divides into two steps: in the first step, the planet would be heated by a warm thick carbon dioxide atmosphere, while the second step would be to convert the atmospheric carbon dioxide and soil nitrates to the desired oxygen and nitrogen mixture. It is concluded that life will play a major role in any terraforming of Mars, and that terraforming will be a gradual evolutionary process duplicating the early evolution of life on earth.
Technology of welding aluminum alloys-II
NASA Technical Reports Server (NTRS)
1978-01-01
Step-by-step procedures were developed for high-integrity manual and machine welding of aluminum alloys. Detailed instructions are given for each step, with tables and graphs to specify materials and dimensions. Throughout the work sequence, the processing procedure designates manufacturing verification points and inspection points.
Lean methodology: supporting battlefield medical fitness by cutting process waste.
Huggins, Elaine J
2010-01-01
Healthcare has long looked at decreasing risk in communication and patient care processes. Increasing simplicity in communication and patient care processes is a newer concept contained in Lean methodology. Lean is a strategy for achieving improvement in performance through the elimination of steps that use resources without contributing to customer value. This is known as cutting waste, or non-value-added steps. This article outlines how the use of Lean improved a key process that supports battlefield medical fitness.
Computer Processing Of Tunable-Diode-Laser Spectra
NASA Technical Reports Server (NTRS)
May, Randy D.
1991-01-01
Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
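As an illustration of the kind of least-squares fitting mentioned (a generic sketch, not the spectrometer's actual routine library), the following Python snippet fits a single Lorentzian absorption dip to a synthetic direct-transmission spectrum:

import numpy as np
from scipy.optimize import curve_fit

def transmission(wavelength, depth, center, width, baseline):
    """Direct-transmission model: flat baseline times a Lorentzian absorption dip."""
    lorentz = depth / (1.0 + ((wavelength - center) / width) ** 2)
    return baseline * (1.0 - lorentz)

# Synthetic spectrum standing in for one wavelength-stepped measurement channel.
wl = np.linspace(-1.0, 1.0, 200)
data = transmission(wl, 0.3, 0.05, 0.12, 1.0) + np.random.normal(0, 0.005, wl.size)

popt, pcov = curve_fit(transmission, wl, data, p0=[0.2, 0.0, 0.1, 1.0])
print("fitted depth, center, width, baseline:", np.round(popt, 3))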
Wijmans, Johannes G.; Baker, Richard W.; Merkel, Timothy C.
2012-08-21
A gas separation process for treating flue gases from combustion processes, and combustion processes including such gas separation. The invention involves routing a first portion of the flue gas stream to be treated to an absorption-based carbon dioxide capture step, while simultaneously flowing a second portion of the flue gas across the feed side of a membrane, flowing a sweep gas stream, usually air, across the permeate side, then passing the permeate/sweep gas to the combustor.
Applications of process improvement techniques to improve workflow in abdominal imaging.
Tamm, Eric Peter
2016-03-01
Major changes in the management and funding of healthcare are underway that will markedly change the way radiology studies will be reimbursed. The result will be the need to deliver radiology services in a highly efficient manner while maintaining quality. The science of process improvement provides a practical approach to improve the processes utilized in radiology. This article will address in a step-by-step manner how to implement process improvement techniques to improve workflow in abdominal imaging.
Cherry recognition in natural environment based on the vision of picking robot
NASA Astrophysics Data System (ADS)
Zhang, Qirong; Chen, Shanxiong; Yu, Tingzhong; Wang, Yan
2017-04-01
In order to realize the automatic recognition of cherries in the natural environment, this paper presents a recognition method for the vision system of a picking robot. The first step of this method is to pre-process the cherry image with median filtering. The second step is to identify the colour of the cherry through the 0.9R-G colour difference formula, and then use the Otsu algorithm for threshold segmentation. The third step is to remove noise by using an area threshold. The fourth step is to remove the holes in the cherry image by morphological closing and opening operations. The fifth step is to obtain the centroid and contour of the cherry by using the minimum bounding rectangle and the Hough transform. Through this recognition process, we can successfully identify 96% of the cherries without occlusion and adhesion.
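A rough OpenCV (Python, OpenCV 4) sketch of these five steps follows; the kernel sizes, area threshold and file path are assumptions, the area filter is applied to contours after the morphological step, and a bounding rectangle stands in for the rectangle/Hough step, so this approximates rather than reproduces the paper's method.

import cv2
import numpy as np

def find_cherries(bgr_image):
    """Rough sketch of the five steps; thresholds and kernel sizes are guesses."""
    # Step 1: median filtering to suppress noise.
    img = cv2.medianBlur(bgr_image, 5)
    b, g, r = cv2.split(img.astype(np.float32))
    # Step 2: 0.9R - G colour difference, then Otsu thresholding.
    diff = cv2.normalize(0.9 * r - g, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 4: morphological closing then opening to fill holes and smooth edges.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    results = []
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        # Step 3: area threshold to drop small noise regions.
        if cv2.contourArea(c) < 200:
            continue
        # Step 5: centroid and bounding box as a stand-in for the rectangle/Hough step.
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        results.append(((cx, cy), cv2.boundingRect(c)))
    return results

# usage (assumes an image file exists at this illustrative path):
# print(find_cherries(cv2.imread("cherry.jpg")))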
Refining each process step to accelerate the development of biorefineries
Chandra, Richard P.; Ragauskas, Art J.
2016-06-21
Research over the past decade has been mainly focused on overcoming hurdles in the pretreatment, enzymatic hydrolysis, and fermentation steps of biochemical processing. Pretreatments have improved significantly in their ability to fractionate and recover the cellulose, hemicellulose, and lignin components of biomass while producing substrates containing carbohydrates that can be easily broken down by hydrolytic enzymes. There is a rapid movement towards pretreatment processes that incorporate mechanical treatments that make use of existing infrastructure in the pulp and paper industry, which has experienced a downturn in its traditional markets. Enzyme performance has also made great strides with breakthrough developments in nonhydrolytic protein components, such as lytic polysaccharide monooxygenases, as well as the improvement of enzyme cocktails. The fermentability of pretreated and hydrolyzed sugar streams has been improved through strategies such as the use of reducing agents for detoxification, strain selection, and strain improvements. Although significant progress has been made, tremendous challenges still remain to advance each step of biochemical conversion, especially when processing woody biomass. In addition to technical and scale-up issues within each step of the bioconversion process, biomass feedstock supply and logistics challenges still remain at the forefront of biorefinery research.
The REAL process--a process for recycling sludge from water works.
Stendahl, K; Färm, C; Fritzdorf, H
2006-01-01
In order to produce drinking water, coagulants--such as aluminium salts--are widely used for precipitation and separation of impurities from raw water. The residual from the process is sludge, which presents a disposal problem. The REAL process is a method for recycling the aluminium from the sludge. In a first step, the aluminium hydroxide is dissolved in sulphuric acid. In a second step, ultrafiltration separates all suspended matter and large molecules, leaving a concentrate of 15-20% dry solids. The permeate will contain the trivalent aluminium ions together with 30-50% of the organic contaminants. In a third step, by concentrating the permeate in a nanofilter, the concentration of aluminium will be high enough to, in a fourth step, be precipitated with potassium sulphate to form a pure crystal: potassium aluminium sulphate. The potassium aluminium sulphate is comparable to standard aluminium sulphate. The process will give a residual in the form of a concentrate from the ultrafiltration, representing a few per cent of the incoming volume. This paper presents the results from a long-term pilot-scale continuous test run at Västerås water works in Sweden, as well as calculations of costs for full-scale operations.
Pfeffer, Jan; Freund, Andreas; Bel-Rhlid, Rachid; Hansen, Carl-Erik; Reuss, Matthias; Schmid, Rolf D; Maurer, Steffen C
2007-10-01
We report here a two-step process for the high-yield enzymatic synthesis of 2-monoacylglycerides (2-MAG) of saturated as well as unsaturated fatty acids with different chain lengths. The process consists of two steps: first the unselective esterification of fatty acids and glycerol leading to a triacylglyceride followed by an sn1,3-selective alcoholysis reaction yielding 2-monoacylglycerides. Remarkably, both steps can be catalyzed by lipase B from Candida antarctica (CalB). The whole process including esterification and alcoholysis was scaled up in a miniplant to a total volume of 10 l. With this volume, a two-step process catalyzed by CalB for the synthesis of 1,3-oleoyl-2-palmitoylglycerol (OPO) using tripalmitate as starting material was established. On a laboratory scale, we obtained gram quantities of the synthesized 2-monoacylglycerides of polyunsaturated fatty acids such as arachidonic-, docosahexaenoic- and eicosapentaenoic acids and up to 96.4% of the theoretically possible yield with 95% purity. On a technical scale (>100 g of product, >5 l of reaction volume), 97% yield was reached in the esterification and 73% in the alcoholysis and a new promising process for the enzymatic synthesis of OPO was established.
Preparation of Rutile from Ilmenite Concentrate Through Pressure Leaching with Hydrochloric Acid
NASA Astrophysics Data System (ADS)
Xiang, Junyi; Liu, Songli; Lv, Xuewei; Bai, Chenguang
2017-04-01
Taking into account the fact that the natural rutile used for the production of titanium dioxide pigment through the chloride process is in critically short supply worldwide, especially in China, an attempt was made to extract synthetic rutile from Yunnan ilmenite concentrate with a hydrochloric acid pressure leaching process. The leaching parameters for a one-step leaching process were investigated. The results showed that the optimum conditions are a leaching temperature of 413 K (140 °C), an acid concentration of 20 pct HCl, a leaching time of 4 hours, and a liquid/solid mass ratio of 8:1. A two-step leaching process was also suggested to reutilize the leaching liquor, which has a high content of HCl. The results showed that the content of HCl decreased from 135 to 75 g/L, the total iron increased from 44.5 g/L to about 87.6 g/L, and the liquid/solid mass ratio decreased to 5:1 with the two-step leaching process. The leaching product produced through the two-step leaching process shows a pure golden red colour with a high content of titanium (92.65 pct TiO2), a relatively low content of calcium (0.10 pct CaO) and magnesium (0.12 pct MgO), but a high content of silicon (5.72 pct SiO2).
Ae Kim, Sun; Hong Park, Si; In Lee, Sang; Owens, Casey M.; Ricke, Steven C.
2017-01-01
The purpose of this study was to 1) identify microbial compositional changes on chicken carcasses during processing, 2) determine the antimicrobial efficacy of peracetic acid (PAA) and Amplon (a blend of sulfuric acid and sodium sulfate) at a poultry processing pilot plant scale, and 3) compare microbial communities between chicken carcass rinsates and bacteria recovered from media. Birds were collected from each processing step, and rinsates were used to estimate the aerobic plate count (APC) as well as Campylobacter and Salmonella prevalence. Microbiome sequencing was utilized to identify microbial population changes over processing and antimicrobial treatments. Only the PAA treatment exhibited a significant reduction of APC at the post-chilling step, while both Amplon and PAA yielded detectable Campylobacter reductions at all steps. Based on microbiome sequencing, Firmicutes were the predominant bacterial group at the phylum level, with over 50% frequency in all steps, while the relative abundance of Proteobacteria decreased as processing progressed. Overall microbiota between rinsate and APC plate microbial populations revealed generally similar patterns at the phylum level but differed at the genus level. Both antimicrobials appeared to be effective at reducing problematic bacteria, and microbiome analysis can be utilized to identify optimal indicator microorganisms for enhancing product quality. PMID:28230180
Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N
2014-06-20
The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, the improved MDM method, based on variable duration of alternating phase elution steps, has been developed and validated. The MDM separation processes with variable duration of phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: 1 - with one-step solute elution, where the separation is conducted so that the sample is transferred forward and back with the upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; 2 - with multi-step solute elution, where the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared to the predictions of the theory. A good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
Nie, Chunhong; Shao, Nan; Wang, Baohui; Yuan, Dandan; Sui, Xin; Wu, Hongjun
2016-07-01
The STEP (Solar Thermal Electrochemical Process) for Advanced Oxidation Processes (AOPs; combined as STEP-AOPs), fully driven by solar energy without the input of any other forms of energy or chemicals, is introduced and demonstrated from theory to experiments. Exemplified by the persistent organic pollutant 2-nitrophenol in water, the fundamental model and practical system are exhibited for the STEP-AOPs to efficiently transform 2-nitrophenol into carbon dioxide, water, and other substances. The results show that the STEP-AOPs system performs more effectively than classical AOPs in terms of the thermodynamics and kinetics of pollutant oxidation. Due to the combination of solar thermochemical reactions with electrochemistry, the STEP-AOPs system allows the requisite electrolysis voltage of 2-nitrophenol to be experimentally decreased from 1.00 V to 0.84 V, and the response current increases from 18 mA to 40 mA. STEP-AOPs also greatly improve the kinetics of the oxidation at 30 °C and 80 °C. As a result, the removal rate of 2-nitrophenol after 1 h increased from 19.50% at 30 °C to 32.70% at 80 °C at a constant 1.90 V. Mechanistic analysis reveals that the oxidation pathway is favorably changed because of thermal effects. Tracking of the reaction showed that benzenediol and hydroquinone are initial products, with maleic acid and formic acid as sequential carboxylic acid products, and carbon dioxide as the final product. The theory and experiments on the STEP-AOPs system, exemplified by the oxidation of 2-nitrophenol, provide a broad basis for extension of the STEP and AOPs for rapid and efficient treatment of organic wastewater. Copyright © 2016 Elsevier Ltd. All rights reserved.
Distribution behaviour of acaricide cyflumetofen in tomato during home canning.
Liu, Na; Dong, Fengshou; Chen, Zenglong; Xu, Jun; Liu, Xingang; Duan, Lifang; Li, Minmin; Zheng, Yongquan
2016-05-01
The distribution behaviour of cyflumetofen in tomatoes during home canning was studied. The targeted compound cyflumetofen was determined by ultra-performance liquid chromatography coupled with tandem mass spectrometry (UPLC-MS/MS) after each process step, which included washing, peeling, homogenisation, simmering and sterilisation. Results indicated that more cyflumetofen was removed by washing with detergent solution compared with tap water, 2% NaCl solution and 2% CH3COOH solution. Peeling resulted in 90.2% loss of cyflumetofen and was the most effective step at removing pesticide residues from tomatoes. The processing factors (PFs) of tomato samples after each step were generally less than 1; in particular, the PF of the peeling process for cyflumetofen was 0.28.
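For reference, the processing factor used above is conventionally defined (a standard definition, not stated explicitly in the abstract) as

PF = (residue concentration in the processed commodity) / (residue concentration in the raw commodity),

so the reported PF of 0.28 for peeling corresponds to the peeled tomato retaining roughly 28% of the cyflumetofen concentration present before that step.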
Dual-step synthesis of 3-dimensional niobium oxide - Zinc oxide
NASA Astrophysics Data System (ADS)
Rani, Rozina Abdul; Zoolfakar, Ahmad Sabirin; Rusop, M.
2018-05-01
A facile fabrication process for constructing a 3-dimensional (3D) structure of niobium oxide - zinc oxide (Nb2O5-ZnO), consisting of branched ZnO microrods on top of nanoporous Nb2O5 films, was developed based on a dual-step synthesis approach. The first step was anodization of sputtered niobium metal on Fluorine-doped Tin Oxide (FTO) to produce nanoporous Nb2O5, followed by the growth of branched ZnO microrods by a hydrothermal process. This approach offers insight into the development of novel 3D metal oxide films via a dual-step synthesis process, which might potentially be used for multi-functional applications ranging from sensing to photoconversion.
Method to Improve Indium Bump Bonding via Indium Oxide Removal Using a Multi-Step Plasma Process
NASA Technical Reports Server (NTRS)
Dickie, Matthew R. (Inventor); Nikzad, Shouleh (Inventor); Greer, H. Frank (Inventor); Jones, Todd J. (Inventor); Vasquez, Richard P. (Inventor); Hoenk, Michael E. (Inventor)
2012-01-01
A process for removing indium oxide from indium bumps in a flip-chip structure to reduce contact resistance, by a multi-step plasma treatment. A first plasma treatment of the indium bumps with an argon, methane and hydrogen plasma reduces indium oxide, and a second plasma treatment with an argon and hydrogen plasma removes residual organics. The multi-step plasma process for removing indium oxide from the indium bumps is more effective in reducing the oxide, and yet does not require the use of halogens, does not change the bump morphology, does not attack the bond pad material or under-bump metallization layers, and creates no new mechanisms for open circuits.
Radically New Adsorption Cycles for Carbon Dioxide Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
James A. Ritter; Armin D. Ebner; James A. McIntyre
2005-10-11
In Parts I and II of this project, a rigorous pressure swing adsorption (PSA) process simulator was used to study new, high temperature PSA cycles, based on the use of a K-promoted HTlc adsorbent and 4- and 5-step (bed) vacuum swing PSA cycles, which were designed to process a typical stack gas effluent at 575 K containing (in vol%) 15% CO2, 75% N2 and 10% H2O into a light product stream depleted of CO2 and a heavy product stream enriched in CO2. Thousands of simulations (2,850 in total) were carried out to the periodic state to study the effects of the light product purge to feed ratio (γ), cycle step time (t_s) or cycle time (t_c), high to low pressure ratio (π_T), and heavy product recycle ratio (R_R) on the process performance, while changing the cycle configuration from 4- to 5-step (bed) designs utilizing combinations of light and heavy reflux steps, two different depressurization modes, and two sets of CO2-HTlc mass transfer coefficients. The process performance was judged in terms of the CO2 purity and recovery, and the feed throughput. The best process performance was obtained from a 5-step (bed) stripping PSA cycle with a light reflux step and a heavy reflux step (with the heavy reflux gas obtained from the low pressure purge step), with a CO2 purity of 78.9%, a CO2 recovery of 57.4%, and a throughput of 11.5 L STP/hr/kg. This performance improved substantially when the CO2-HTlc adsorption and desorption mass transfer coefficients (uncertain quantities at this time) were increased by factors of five, yielding a CO2 purity of 90.3%, a CO2 recovery of 73.6%, and a throughput of 34.6 L STP/hr/kg. Overall, this preliminary study showed the importance of cycle configuration, through the heavy and dual reflux concepts, and of well-defined mass transfer coefficients to the performance of a high temperature PSA process for CO2 capture and concentration from flue and stack gases using an HTlc adsorbent. This study is continuing.
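To make the scale of that parameter study concrete, the sketch below shows the general shape of such a sweep: enumerate combinations of purge-to-feed ratio, step time, pressure ratio, and recycle ratio, evaluate each cycle, and rank the results. The evaluate_cycle() function here is a crude placeholder, not the rigorous PSA simulator used in the study.

```python
# Hedged sketch of the kind of parameter sweep described: enumerate combinations of
# purge-to-feed ratio (gamma), cycle step time, pressure ratio, and recycle ratio,
# evaluate the cycle for each, and record CO2 purity, recovery, and throughput.
# evaluate_cycle() is a placeholder surrogate, not the PSA simulator itself.
from itertools import product

def evaluate_cycle(gamma, t_s, pi_T, R_R):
    # Placeholder; the real study integrates the PSA model to its periodic state.
    purity = 60 + 15 * gamma - 2 * R_R
    recovery = 45 + 5 * pi_T - 10 * gamma
    throughput = 30 / t_s
    return purity, recovery, throughput

results = []
for gamma, t_s, pi_T, R_R in product([0.5, 1.0, 1.5], [30, 60], [3, 5], [0.0, 0.5]):
    results.append(((gamma, t_s, pi_T, R_R), evaluate_cycle(gamma, t_s, pi_T, R_R)))

best = max(results, key=lambda r: r[1][0])       # e.g. rank by CO2 purity
print("best settings:", best[0], "-> purity/recovery/throughput:", best[1])
```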
A Unified Model of Cloud-to-Ground Lightning Stroke
NASA Astrophysics Data System (ADS)
Nag, A.; Rakov, V. A.
2014-12-01
The first stroke in a cloud-to-ground lightning discharge is thought to follow (or be initiated by) the preliminary breakdown process which often produces a train of relatively large microsecond-scale electric field pulses. This process is poorly understood and rarely modeled. Each lightning stroke is composed of a downward leader process and an upward return-stroke process, which are usually modeled separately. We present a unified engineering model for computing the electric field produced by a sequence of preliminary breakdown, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively-charged channel extends downward in a stepped fashion through the relatively-high-field region between the main negative and lower positive charge centers and then through the relatively-low-field region below the lower positive charge center. A relatively-high-field region is also assumed to exist near ground. The preliminary breakdown pulse train is assumed to be generated when the negatively-charged channel interacts with the lower positive charge region. At each step, an equivalent current source is activated at the lower extremity of the channel, resulting in a step current wave that propagates upward along the channel. The leader deposits net negative charge onto the channel. Once the stepped leader attaches to ground (upward connecting leader is presently neglected), an upward-propagating return stroke is initiated, which neutralizes the charge deposited by the leader along the channel. We examine the effect of various model parameters, such as step length and current propagation speed, on model-predicted electric fields. We also compare the computed fields with pertinent measurements available in the literature.
Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform
NASA Technical Reports Server (NTRS)
Amini, Abolfazl M.; Figueroa, Fenando
2003-01-01
In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step up and step down as well as the sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Each point of the signal is then taken together with a trailing segment of previously collected data. Two trailing lengths are used, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC component is removed from each data set, the data are passed through a window, and the spectrum of each set is calculated. To extract features, the signal power, peak, and spectrum are plotted versus time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian distributed noisy cases.
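The pipeline described above (trailing window, DC removal, windowing, spectrum, then power and peak tracked over time) can be sketched compactly in code. The following Python/NumPy version is a minimal illustration under assumed sampling rate, window length, and a synthetic RC-like step signal; it is not the paper's Matlab implementation.

```python
# Minimal sketch of the described feature-extraction pipeline: slide a trailing
# window over the sensor signal, remove the DC component, apply a Hann window,
# compute the spectrum, and record total power and spectral peak versus time.
# The synthetic signal and window length are illustrative, not the paper's data.
import numpy as np

fs = 100.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
signal = 1 - np.exp(-t / 3.0)               # step-up (charging) of an RC-like process
signal += 0.05 * np.random.randn(t.size)    # sensor noise

win_len = 200                               # trailing window length (samples), assumed
power, peak = [], []
for i in range(win_len, t.size):
    seg = signal[i - win_len:i]
    seg = seg - seg.mean()                  # remove DC
    seg = seg * np.hanning(win_len)         # window before the FFT
    spec = np.abs(np.fft.rfft(seg))
    power.append(np.sum(spec ** 2))
    peak.append(spec.max())

print(len(power), "feature samples; last power =", round(power[-1], 3))
```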
Anokye, Nana Kwame; Pokhrel, Subhash; Buxton, Martin; Fox-Rushby, Julia
2013-06-01
Little is known about the correlates of meeting recommended levels of participation in physical activity (PA) and how this understanding informs public health policies on behaviour change. The aim was to analyse who meets the recommended level of participation in PA, in males and females separately, by applying 'process' modelling frameworks (single vs. sequential 2-step process). Using the Health Survey for England 2006 (n = 14 142; ≥ 16 years), gender-specific regression models were estimated using bivariate probit with selectivity correction and single probit models. A 'sequential, 2-step process' modelled participation and meeting the recommended level separately, whereas the 'single process' considered participation and level together. In females, meeting the recommended level was associated with holding a degree [marginal effect (ME) = 0.013] and age (ME = -0.001), whereas in males, age was a significant correlate (ME = -0.003 to -0.004). The order of importance of correlates was similar across genders, with ethnicity the most important correlate in both males (ME = -0.060) and females (ME = -0.133). The 'sequential, 2-step process' performed better in females (ρ = -0.364, P < 0.001) than in males (ρ = 0.154). The degree to which people undertake the recommended level of PA through vigorous activity varies between males and females, and the process that best predicts such decisions, i.e. whether it is a sequential, 2-step process or a single-step choice, also differs between males and females. Understanding this should help to identify subgroups that are less likely to meet the recommended level of PA (and hence more likely to benefit from any PA promotion intervention).
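To illustrate the "sequential, 2-step" idea in code, the sketch below fits one probit for participation and a second probit for meeting the recommended level among participants only, on synthetic data. It deliberately omits the selectivity correction (ρ) that the study's bivariate probit uses, and all variable names and data are invented.

```python
# Minimal sketch of a sequential 2-step model on synthetic data: first a probit
# for any participation in PA, then a probit for meeting the recommended level
# among participants only. This ignores the selectivity correction (rho) used in
# the paper and is purely illustrative; variables and coefficients are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(16, 80, n)
degree = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([age, degree]))

participates = (0.5 - 0.01 * age + 0.3 * degree + rng.normal(size=n)) > 0
meets = (0.2 - 0.02 * age + 0.4 * degree + rng.normal(size=n)) > 0

step1 = sm.Probit(participates.astype(int), X).fit(disp=False)        # participation
mask = participates
step2 = sm.Probit(meets[mask].astype(int), X[mask]).fit(disp=False)   # level, among participants
print(step1.params, step2.params, sep="\n")
```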
Implementation of Competency-Based Pharmacy Education (CBPE)
Koster, Andries; Schalekamp, Tom; Meijerman, Irma
2017-01-01
Implementation of competency-based pharmacy education (CBPE) is a time-consuming, complicated process, which requires agreement on the tasks of a pharmacist, commitment, institutional stability, and a goal-directed developmental perspective of all stakeholders involved. In this article the main steps in the development of a fully-developed competency-based pharmacy curriculum (bachelor, master) are described and tips are given for a successful implementation. After the choice for entering into CBPE is made and a competency framework is adopted (step 1), intended learning outcomes are defined (step 2), followed by analyzing the required developmental trajectory (step 3) and the selection of appropriate assessment methods (step 4). Designing the teaching-learning environment involves the selection of learning activities, student experiences, and instructional methods (step 5). Finally, an iterative process of evaluation and adjustment of individual courses, and the curriculum as a whole, is entered (step 6). Successful implementation of CBPE requires a system of effective quality management and continuous professional development as a teacher. In this article suggestions for the organization of CBPE and references to more detailed literature are given, hoping to facilitate the implementation of CBPE. PMID:28970422
NASA Astrophysics Data System (ADS)
Kim, Jin Seok; Hur, Min Young; Kim, Chang Ho; Kim, Ho Jun; Lee, Hae June
2018-03-01
A two-dimensional parallelized particle-in-cell simulation has been developed to model a capacitively coupled plasma reactor. Parallelization on graphics processing units is applied to handle the heavy computational load. It is found that step-ionization plays an important role at intermediate gas pressures of a few Torr. Without step-ionization, the average electron density decreases and the effective electron temperature increases as the gas pressure rises at fixed power. With step-ionization, however, the average electron density increases and the effective electron temperature decreases with increasing gas pressure. The cases with step-ionization agree well with the trend of the experimental measurements. The electron energy distribution functions show that the population of electrons with intermediate energies from 4.2 to 12 eV is relaxed by step-ionization. Also, with step-ionization included, the power consumed by the electrons increases with increasing gas pressure, while the power consumed by the ions decreases.
The effects of processing techniques on magnesium-based composite
NASA Astrophysics Data System (ADS)
Rodzi, Siti Nur Hazwani Mohamad; Zuhailawati, Hussain
2016-12-01
The aim of this study is to investigate the effect of processing techniques on the densification, hardness and compressive strength of a Mg alloy and a Mg-based composite for biomaterial application. The control sample (pure Mg) and the Mg-based composite (Mg-Zn/HAp) were fabricated through a mechanical alloying process using a high energy planetary mill, whilst another Mg-Zn/HAp composite was fabricated through double step processing (the Mg-Zn matrix alloy was fabricated by planetary milling, and HAp was subsequently dispersed by roll milling). The as-milled powder was then consolidated by cold pressing into 10 mm diameter pellets under 400 MPa compaction pressure before being sintered at 300 °C for 1 hour under a flow of argon. The densification of the sintered pellets was then determined by the Archimedes principle. The mechanical properties of the sintered pellets were characterized by microhardness and compression tests. The results show that the density of the pellets was significantly increased by the addition of HAp, but the highest density was observed when the sample was fabricated through double step processing (1.8046 g/cm3). Slight increases in hardness and ultimate compressive strength were observed for the Mg-Zn/HAp composite fabricated through double step processing (58.09 HV, 132.19 MPa), as compared to the Mg-Zn/HAp produced through single step processing (47.18 HV, 122.49 MPa).
Fuhrman, Susan I.; Redfern, Mark S.; Jennings, J. Richard; Perera, Subashan; Nebes, Robert D.; Furman, Joseph M.
2013-01-01
Postural dual-task studies have demonstrated effects of various executive function components on gait and postural control in older adults. The purpose of the study was to explore the role of inhibition during lateral step initiation. Forty older adults participated (range 70–94 yr). Subjects stepped to the left or right in response to congruous and incongruous visual cues that consisted of left and right arrows appearing on left or right sides of a monitor. The timing of postural adjustments was identified by inflection points in the vertical ground reaction forces (VGRF) measured separately under each foot. Step responses could be classified into preferred and nonpreferred step behavior based on the number of postural adjustments that were made. Delays in onset of the first postural adjustment (PA1) and liftoff (LO) of the step leg during preferred steps progressively increased among the simple, choice, congruous, and incongruous tasks, indicating interference in processing the relevant visuospatial cue. Incongruous cues induced subjects to make more postural adjustments than they typically would (i.e., nonpreferred steps), representing errors in selection of the appropriate motor program. During these nonpreferred steps, the onset of the PA1 was earlier than during the preferred steps, indicating a failure to inhibit an inappropriate initial postural adjustment. The functional consequence of the additional postural adjustments was a delay in the LO compared with steps in which they did not make an error. These results suggest that deficits in inhibitory function may detrimentally affect step decision processing, by delaying voluntary step responses. PMID:23114211
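A minimal way to picture the timing measure described above (onset of the first postural adjustment from the vertical ground reaction force) is a threshold crossing on the force's rate of change after the cue. The sketch below uses a synthetic VGRF trace and an assumed threshold rule; it does not reproduce the study's detection algorithm.

```python
# Minimal sketch: locating the onset of the first postural adjustment (PA1) as the
# first threshold crossing in the vertical ground reaction force (VGRF) after the
# visual cue. The synthetic trace, sampling rate, and threshold are illustrative
# only and do not reproduce the study's detection method.
import numpy as np

fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)                 # 1 s after cue onset
vgrf = 400 + np.where(t > 0.25, 30 * (1 - np.exp(-(t - 0.25) / 0.05)), 0.0)
vgrf += np.random.default_rng(1).normal(0, 0.5, t.size)

rate = np.gradient(vgrf, 1 / fs)              # N/s
baseline = rate[t < 0.2]
threshold = baseline.mean() + 5 * baseline.std()
onset_idx = np.argmax(rate > threshold)       # first sample exceeding threshold
print(f"PA1 onset ~ {t[onset_idx] * 1000:.0f} ms after cue")
```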
Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger
Mille, Marie‐Laure
2016-01-01
Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation‐induced steps that are triggered as fast as or faster than for younger adults. While age‐associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step‐triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event‐triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. PMID:26915664
Analysis, design, fabrication, and performance of three-dimensional braided composites
NASA Astrophysics Data System (ADS)
Kostar, Timothy D.
1998-11-01
Cartesian 3-D (track and column) braiding as a method of composite preforming has been investigated. A complete analysis of the process was conducted to understand the limitations and potentials of the process. Knowledge of the process was enhanced through development of a computer simulation, and it was discovered that individual control of each track and column and multiple-step braid cycles greatly increases possible braid architectures. Derived geometric constraints coupled with the fundamental principles of Cartesian braiding resulted in an algorithm to optimize preform geometry in relation to processing parameters. The design of complex and unusual 3-D braids was investigated in three parts: grouping of yarns to form hybrid composites via an iterative simulation; design of composite cross-sectional shape through implementation of the Universal Method; and a computer algorithm developed to determine the braid plan based on specified cross-sectional shape. Several 3-D braids, which are the result of variations or extensions to Cartesian braiding, are presented. An automated four-step braiding machine with axial yarn insertion has been constructed and used to fabricate two-step, double two-step, four-step, and four-step with axial and transverse yarn insertion braids. A working prototype of a multi-step braiding machine was used to fabricate four-step braids with surrogate material insertion, unique hybrid structures from multiple track and column displacement and multi-step cycles, and complex-shaped structures with constant or varying cross-sections. Braid materials include colored polyester yarn to study the yarn grouping phenomena, Kevlar, glass, and graphite for structural reinforcement, and polystyrene, silicone rubber, and fasteners for surrogate material insertion. A verification study for predicted yarn orientation and volume fraction was conducted, and a topological model of 3-D braids was developed. The solid model utilizes architectural parameters, generated from the process simulation, to determine the composite elastic properties. Methods of preform consolidation are investigated and the results documented. The extent of yarn deformation (packing) resulting from preform consolidation was investigated through cross-sectional micrographs. The fiber volume fraction of select hybrid composites was measured and representative unit cells are suggested. Finally, a comparison study of the elastic performance of Kevlar/epoxy and carbon/Kevlar hybrid composites was conducted.
NASA Astrophysics Data System (ADS)
Wallace, William; Miller, Jared; Diallo, Ahmed
2015-11-01
MultiPoint Thomson Scattering (MPTS) is an established, accurate method for finding the temperature, density, and pressure of a magnetically confined plasma. Two Nd:YAG (1064 nm) lasers are fired into the plasma at an effective frequency of 60 Hz, and the light is Doppler shifted by Thomson scattering. Polychromators on the NSTX-U midplane collect the scattered photons at various radii and scattering angles, and the avalanche photodiode voltages are saved to an MDSplus tree for later analysis. IDL code is then used to determine plasma temperature, pressure, and density from the captured polychromator measurements via the Selden formulas [1]. Previous work [2] converted the single-processor IDL code into Python and prepared a new architecture for processing MPTS data in parallel. However, that work did not reach the point of producing output data and curve fits that match the previous IDL results. This project refactored the Python code into an object-oriented architecture and created a software test suite for the new architecture, which allowed identification of the code that generated the difference in output. Another effort currently underway is to display the Thomson data in an intuitive, interactive format. This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Community College Internship (CCI) program.
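The test-suite strategy described above amounts to a regression test against the legacy IDL outputs. The sketch below shows one way such a test could look; the file names, the analyze() entry point, and tolerances are all hypothetical placeholders, not the actual MPTS code.

```python
# Sketch of a regression test in the spirit described: compare outputs of the
# refactored object-oriented Python analysis against reference values produced by
# the legacy IDL code. File names, the analyze() entry point, and tolerances are
# hypothetical placeholders, not the actual MPTS implementation.
import numpy as np

def analyze(raw_signals):
    """Placeholder for the refactored temperature/density fit."""
    return {"T_e": raw_signals.mean(axis=1), "n_e": raw_signals.sum(axis=1)}

def test_matches_idl_reference():
    raw = np.load("shot_204062_polychromators.npy")       # hypothetical input file
    reference = np.load("shot_204062_idl_output.npz")     # hypothetical IDL reference
    result = analyze(raw)
    np.testing.assert_allclose(result["T_e"], reference["T_e"], rtol=1e-3)
    np.testing.assert_allclose(result["n_e"], reference["n_e"], rtol=1e-3)
```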
Multiphysics Application Coupling Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Michael T.
2013-12-02
This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the University-based program. IllinoisRocstar is now licensing these new developments as free, open source software, in the hope of improving its own and others' access to infrastructure that can be readily utilized in developing coupled or composite software systems, with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation: the Application Component Toolkit (ACT) and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (and will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: 1. The Component Object Manager (COM): the COM package provides encapsulation of user applications and their data, and also provides the inter-component function call mechanism. 2. The System Integration Manager (SIM): the SIM package provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.
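As a loose illustration of the encapsulation and inter-component call pattern attributed to the COM package, the sketch below implements a tiny registry through which one component exposes functions and data that another component invokes by name. This is not the IllinoisRocstar API; the class and method names are invented.

```python
# Hedged sketch of the encapsulation / inter-component function call idea described
# for the Component Object Manager (COM). Not the actual ACT/COM API; the registry,
# register_function, and call names are invented to illustrate the pattern.
class ComponentManager:
    def __init__(self):
        self._functions = {}      # "Component.function" -> callable
        self._data = {}           # "Component.dataset"  -> object

    def register_function(self, component, name, func):
        self._functions[f"{component}.{name}"] = func

    def register_data(self, component, name, obj):
        self._data[f"{component}.{name}"] = obj

    def call(self, qualified_name, *args, **kwargs):
        return self._functions[qualified_name](*args, **kwargs)

# Usage: a fluid solver exposes a function that a structures solver invokes by name.
com = ComponentManager()
com.register_function("Fluid", "wall_pressure", lambda t: 101325.0 + 10.0 * t)
print(com.call("Fluid.wall_pressure", 0.5))
```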
Re-Factoring Glycolytic Genes for Targeted Engineering of Catabolism in Gram-Negative Bacteria.
Sánchez-Pascuala, Alberto; Nikel, Pablo I; de Lorenzo, Víctor
2018-01-01
The Embden-Meyerhof-Parnas (EMP) pathway is widely accepted to be the biochemical standard of glucose catabolism. The well-characterized glycolytic route of Escherichia coli, based on the EMP catabolism, is an example of an intricate pathway in terms of genomic organization of the genes involved and patterns of gene expression and regulation. This intrinsic genetic and metabolic complexity renders it difficult to engineer glycolytic activities and transfer them onto other microbial cell factories, thus limiting the biotechnological potential of bacterial hosts that lack the route. Taking into account the potential applications of such a portable tool for targeted pathway engineering, in the present protocol we describe how the genes encoding all the enzymes of the linear EMP route have been individually recruited from the genome of E. coli K-12, edited in silico to remove their endogenous regulatory signals, and synthesized de novo following a standard (i.e., GlucoBrick) that facilitates their grouping in the form of functional modules that can be combined at the user's will. This novel genetic tool allows for the à la carte implementation or boosting of EMP pathway activities into different Gram-negative bacteria. The potential of the GlucoBrick platform is further illustrated by engineering novel glycolytic activities in the most representative members of the Pseudomonas genus (Pseudomonas putida and Pseudomonas aeruginosa).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeung, Yu-Hong; Pothen, Alex; Halappanavar, Mahantesh
We present an augmented matrix approach to update the solution to a linear system of equations when the coefficient matrix is modified by a few elements within a principal submatrix. This problem arises in the dynamic security analysis of a power grid, where operators need to perform $N-x$ contingency analysis, i.e., determine the state of the system when up to $x$ links from $N$ fail. Our algorithms augment the coefficient matrix to account for the changes in it, and then compute the solution to the augmented system without refactoring the modified matrix. We provide two algorithms, a direct method and a hybrid direct-iterative method, for solving the augmented system. We also exploit the sparsity of the matrices and vectors to accelerate the overall computation. Our algorithms are compared on three power grids with PARDISO, a parallel direct solver, and CHOLMOD, a direct solver with the ability to modify the Cholesky factors of the coefficient matrix. We show that our augmented algorithms outperform PARDISO (by two orders of magnitude) and CHOLMOD (by a factor of up to 5). Further, our algorithms scale better than CHOLMOD as the number of elements updated increases. The solutions are computed with high accuracy. Our algorithms are capable of computing $N-x$ contingency analysis on a 778K-bus grid, updating a solution with $x = 20$ elements in $1.6 \times 10^{-2}$ seconds on an Intel Xeon processor.
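The core augmented-matrix idea can be shown in a few lines of dense linear algebra: when the change to the coefficient matrix is expressible as a low-rank update U C Vᵀ confined to a principal submatrix, a bordered system built from the original A gives the updated solution without refactoring A + U C Vᵀ. The sketch below is a generic, hedged illustration of that idea with NumPy, not the authors' sparse solver.

```python
# Hedged sketch of the augmented-matrix idea (not the authors' solver): when A is
# modified by a low-rank change U @ C @ V.T inside a principal submatrix, solve the
# bordered system [[A, U], [V.T, -inv(C)]] instead of refactoring A + U C V^T.
# (In practice one would reuse an existing factorization of A; dense solves are
# used here only to keep the example short.)
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2
A = np.eye(n) * 4 + rng.normal(0, 0.1, (n, n))
b = rng.normal(size=n)

# Change a few entries of the leading principal submatrix via a rank-k update.
U = np.zeros((n, k)); U[0, 0] = 1.0; U[2, 1] = 1.0
V = np.zeros((n, k)); V[1, 0] = 1.0; V[3, 1] = 1.0
C = np.diag([0.5, -0.3])

augmented = np.block([[A, U], [V.T, -np.linalg.inv(C)]])
rhs = np.concatenate([b, np.zeros(k)])
x_aug = np.linalg.solve(augmented, rhs)[:n]

x_ref = np.linalg.solve(A + U @ C @ V.T, b)    # refactoring route, for comparison
print(np.allclose(x_aug, x_ref))               # True
```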
An Exponential Regulator for Rapidity Divergences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ye; Neill, Duff; Zhu, Hua Xing
2016-04-01
Finding an efficient and compelling regularization of soft and collinear degrees of freedom at the same invariant mass scale, but separated in rapidity, is a persistent problem in high-energy factorization. In the course of a calculation, one encounters divergences unregulated by dimensional regularization, often called rapidity divergences. Once regulated, a general framework exists for their renormalization, the rapidity renormalization group (RRG), leading to fully resummed calculations of quantities sensitive to the transverse momentum (relative to the jet axis). We examine how this regularization can be implemented via a multi-differential factorization of the soft-collinear phase space, leading to an (in principle) alternative non-perturbative regularization of rapidity divergences. As an example, we examine the fully differential factorization of a color singlet's momentum spectrum in a hadron-hadron collision at threshold. We show how this factorization acts as a mother theory to both traditional threshold and transverse momentum resummation, recovering the classical results for both resummations. Examining the refactorization of the transverse momentum beam functions in the threshold region, we show that one can directly calculate the rapidity-renormalized function, while shedding light on the structure of joint resummation. Finally, we show how, using modern bootstrap techniques, the transverse momentum spectrum is determined by an expansion about the threshold factorization, leading to a viable higher-loop scheme for calculating the relevant anomalous dimensions for the transverse momentum spectrum.
Training for Template Creation: A Performance Improvement Method
ERIC Educational Resources Information Center
Lyons, Paul
2008-01-01
Purpose: There are three purposes to this article: first, to offer a training approach to employee learning and performance improvement that makes use of a step-by-step process of skill/knowledge creation. The process offers follow-up opportunities for skill maintenance and improvement; second, to explain the conceptual bases of the approach; and…
Transforming Student Affairs Strategic Planning into Tangible Results
ERIC Educational Resources Information Center
Taylor, Simone Himbeault; Matney, Malinda M.
2007-01-01
The Division of Student Affairs at the University of Michigan has engaged in an iterative strategic process to create and implement a set of long-range goals. This strategic journey continues to evolve, bringing together the guiding framework of strategic planning steps, a reflective process with an assessment component within each step, and a…
Developing an Assessment of Learning Process: The Importance of Pre-Testing
ERIC Educational Resources Information Center
Sheran, Michelle; Sarbaum, Jeffrey
2012-01-01
Colleges and universities are increasingly being held accountable for assessing and reporting student learning. Recently there has been increased focus on using assessment to improve learning over time. In this paper we present a simple, step-by-step assessment process that will deliver meaningful results to achieve these ends. We emphasize the…
Bibliographic Instruction in a Step-by-Step Approach.
ERIC Educational Resources Information Center
Soash, Richard L.
1992-01-01
Describes an information search process based on Kuhlthau's model that was used to teach bibliographic research to ninth grade students. A research test to ensure that students are familiar with basic library skills is presented, forms for helping students narrow the topic and evaluate materials are provided, and a research process checklist is…
Fox Valley Technical College Quality First Process Model.
ERIC Educational Resources Information Center
Fox Valley Technical Coll., Appleton, WI.
An overview is provided of the Quality First Process Model developed by Fox Valley Technical College (FVTC), Wisconsin, to provide guidelines for quality instruction and service consistent with the highest educational standards. The 16-step model involves activities that should be adaptable to any organization. The steps of the quality model are…
How To Build a Strategic Plan: A Step-by-Step Guide for School Managers.
ERIC Educational Resources Information Center
Clay, Katherine; And Others
Strategic planning techniques for administrators, with a focus on process managers, are presented in this guidebook. The three major tasks of the strategic planning process include the assessment of the current organizational situation, goal setting, and the development of strategies to accomplish this. Strategic planning differs from long-range…
Will Microfilm and Computers Replace Clippings?
ERIC Educational Resources Information Center
Oppendahl, Alison; And Others
Four speeches are presented, each of which deals with the use of computers to organize and retrieve news stories. The first speech relates in detail the step-by-step process devised by the "Free Press" in Detroit to analyze, categorize, code, film, process, and retrieve news stories through the use of the electronic film retrieval…
ERIC Educational Resources Information Center
Frazier, Thomas W.; Youngstrom, Eric A.
2006-01-01
In this article, the authors illustrate a step-by-step process of acquiring and integrating information according to the recommendations of evidence-based practices. A case example models the process, leading to specific recommendations regarding instruments and strategies for evidence-based assessment (EBA) of attention-deficit/hyperactivity…
A Virtual Environment for Process Management. A Step by Step Implementation
ERIC Educational Resources Information Center
Mayer, Sergio Valenzuela
2003-01-01
This paper presents a virtual organizational environment conceived through the integration of three computer programs: a manufacturing simulation package, business process automation (workflow) software, and business intelligence (Balanced Scorecard) software. It was created as a supporting tool for teaching IE; its purpose is to give…
Teaching Statistics from the Operating Table: Minimally Invasive and Maximally Educational
ERIC Educational Resources Information Center
Nowacki, Amy S.
2015-01-01
Statistics courses that focus on data analysis in isolation, discounting the scientific inquiry process, may not motivate students to learn the subject. By involving students in other steps of the inquiry process, such as generating hypotheses and data, students may become more interested and vested in the analysis step. Additionally, such an…
Electronic-carrier-controlled photochemical etching process in semiconductor device fabrication
Ashby, C.I.H.; Myers, D.R.; Vook, F.L.
1988-06-16
An electronic-carrier-controlled photochemical etching process for carrying out patterning and selective removal of material in semiconductor device fabrication includes the steps of selective ion implanting, photochemical dry etching, and thermal annealing, in that order. In the selective ion implanting step, regions of the semiconductor material in a desired pattern are damaged and the remainder of the regions of the material not implanted are left undamaged. The rate of recombination of electrons and holes is increased in the damaged regions of the pattern compared to undamaged regions. In the photochemical dry etching step, which follows the ion implanting step, the material in the undamaged regions of the semiconductor is removed substantially faster than in the damaged regions representing the pattern, leaving the ion-implanted, damaged regions as raised surface structures on the semiconductor material. After completion of the photochemical dry etching step, the thermal annealing step is used to restore the electrical conductivity of the damaged regions of the semiconductor material.
Electronic-carrier-controlled photochemical etching process in semiconductor device fabrication
Ashby, Carol I. H.; Myers, David R.; Vook, Frederick L.
1989-01-01
An electronic-carrier-controlled photochemical etching process for carrying out patterning and selective removal of material in semiconductor device fabrication includes the steps of selective ion implanting, photochemical dry etching, and thermal annealing, in that order. In the selective ion implanting step, regions of the semiconductor material in a desired pattern are damaged and the remainder of the regions of the material not implanted are left undamaged. The rate of recombination of electrons and holes is increased in the damaged regions of the pattern compared to undamaged regions. In the photochemical dry etching step, which follows the ion implanting step, the material in the undamaged regions of the semiconductor is removed substantially faster than in the damaged regions representing the pattern, leaving the ion-implanted, damaged regions as raised surface structures on the semiconductor material. After completion of the photochemical dry etching step, the thermal annealing step is used to restore the electrical conductivity of the damaged regions of the semiconductor material.
Yuan, Yuan; Macquarrie, Duncan J
2015-12-01
The biorefinery is an important concept for the development of alternative routes to a range of interesting and important materials from renewable resources. It ensures that the resources are used fully and that all parts of them are valorized. This paper develops this concept, using the brown macroalga Ascophyllum nodosum as an example, with the assistance of microwave technology. A step-by-step process was designed to obtain fucoidan, alginates, sugars and biochar (alga residue) consecutively. The yields of fucoidan, alginates, sugars and biochar were 14.09%, 18.24%, 10.87% and 21.44%, respectively. To evaluate the biorefinery process, the seaweed sample was also treated for fucoidan extraction only, alginate extraction only, and hydrothermal treatment for sugars and biochar only. The chemical composition and properties of each product were also analyzed. The results indicated that A. nodosum could potentially be used as feedstock for a biorefinery process to produce valuable chemicals and fuels. Copyright © 2015 Elsevier Ltd. All rights reserved.
High-throughput screening of chromatographic separations: IV. Ion-exchange.
Kelley, Brian D; Switzer, Mary; Bastek, Patrick; Kramarczyk, Jack F; Molnar, Kathleen; Yu, Tianning; Coffman, Jon
2008-08-01
Ion-exchange (IEX) chromatography steps are widely applied in protein purification processes because of their high capacity, selectivity, robust operation, and well-understood principles. Optimization of IEX steps typically involves resin screening and selection of the pH and counterion concentrations of the load, wash, and elution steps. Time and material constraints associated with operating laboratory columns often preclude evaluating more than 20-50 conditions during early stages of process development. To overcome this limitation, a high-throughput screening (HTS) system employing a robotic liquid handling system and 96-well filterplates was used to evaluate various operating conditions for IEX steps for monoclonal antibody (mAb) purification. A screening study for an adsorptive cation-exchange step evaluated eight different resins. Sodium chloride concentrations defining the operating boundaries of product binding and elution were established at four different pH levels for each resin. Adsorption isotherms were measured for 24 different pH and salt combinations for a single resin. An anion-exchange flowthrough step was then examined, generating data on mAb adsorption for 48 different combinations of pH and counterion concentration for three different resins. The mAb partition coefficients were calculated and used to estimate the characteristic charge of the resin-protein interaction. Host cell protein and residual Protein A impurity levels were also measured, providing information on selectivity within this operating window. The HTS system shows promise for accelerating process development of IEX steps, enabling rapid acquisition of large datasets addressing the performance of the chromatography step under many different operating conditions. (c) 2008 Wiley Periodicals, Inc.
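As a rough illustration of the quantities mentioned above, the sketch below computes partition coefficients from filterplate-style bound/free measurements and estimates a characteristic charge from the slope of log Kp versus log counterion concentration (the stoichiometric displacement picture). All numbers, volumes, and the fitting choice are invented; this is not the authors' analysis code.

```python
# Hedged sketch of how partition coefficients and a characteristic charge might be
# estimated from filterplate data (not the authors' code). Kp = bound protein per mL
# resin / free protein in supernatant; under the stoichiometric displacement model,
# log Kp falls roughly linearly with log counterion concentration, and the magnitude
# of the slope estimates the characteristic charge. All numbers are invented.
import numpy as np

salt_mM = np.array([50.0, 100.0, 150.0, 200.0])          # counterion concentrations
supernatant = np.array([0.05, 0.20, 0.55, 1.10])          # free mAb, mg/mL (invented)
loaded, resin_mL = 2.0, 0.05                              # mg loaded per well, resin volume

bound = loaded - supernatant * 0.3                        # 0.3 mL liquid phase per well, assumed
Kp = (bound / resin_mL) / supernatant                     # partition coefficient (mL/mL)

slope, intercept = np.polyfit(np.log10(salt_mM), np.log10(Kp), 1)
print("partition coefficients:", np.round(Kp, 1))
print("characteristic charge ~", round(-slope, 2))
```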
Hydrothermal liquefaction pathways for low-nitrogen biocrude from wet algae
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanzella, Francis; Lim, Jin-Ping
Our SRI International (SRI) team has developed a new two-step hydrothermal liquefaction (HTL) process to convert wet algal biomass into biocrude oil. The first step in the process (low-temperature HTL or HTL1) yields crude oil but, most importantly, it selectively dissolves nitrogen-containing compounds in the aqueous phase. Once the oil and the aqueous phase are separated, the low-nitrogen soft solids left behind can be taken to the second step (high-temperature HTL or HTL2) for full conversion to biocrude. HTL2 will hence yield low-nitrogen biocrude, which can be hydro-processed to yield transportation fuels. The expected high carbon yield and low nitrogen content can lead to a transportation fuel from algae that avoids two problems common to existing algae-to-fuel processes: (1) poisoning of the hydro-processing catalyst; and (2) inefficient conversion of algae-to-liquid fuels. The process we studied would yield a new route to strategic energy production from domestic sources.
"2sDR": Process Development of a Sustainable Way to Recycle Steel Mill Dusts in the 21st Century
NASA Astrophysics Data System (ADS)
Rösler, Gernot; Pichler, Christoph; Antrekowitsch, Jürgen; Wegscheider, Stefan
2014-09-01
Significant amounts of electric arc furnace dust originating from steel production are recycled every year by the Waelz process, despite the fact that this type of process has several disadvantages. One alternative method would be the recovery of very high-quality ZnO as well as iron and even chromium in the two-step dust recycling process, which was invented to treat special waste for the recovery of heavy metal-containing residues. The big advantage of that process is that various types of residues, especially dusts, can be treated in an oxidizing first step for cleaning, with a subsequent reducing step for the metal recovery. After the treatment, three different fractions (dust, slag, and an iron alloy) can be used without any limitations. This study focuses on the development of the process along with some thermodynamic considerations. Moreover, a final overview of mass balances of an experiment performed in a 100-kg top blowing rotary converter with further developments is provided.
Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C; Wong, Willy; Daskalakis, Zafiris J; Farzan, Faranak
2016-01-01
Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprised of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research.
Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak
2016-01-01
Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprised of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research. PMID:27774054
NASA Astrophysics Data System (ADS)
Pawar, V.; Weaver, C.; Jani, S.
2011-05-01
Zirconium, and particularly the Zr-2.5 wt%Nb (Zr2.5Nb) alloy, is useful for engineering bearing applications because it can be oxidized in air to form a hard ceramic surface. Oxidized zirconium (OxZr), with its abrasion-resistant ceramic surface and biocompatible substrate alloy, has been used as a bearing surface in total joint arthroplasty for several years. OxZr is characterized by a hard zirconium oxide (oxide) formed on Zr2.5Nb by one-step thermal oxidation carried out in air. Because the oxide is present only at the surface, the bulk material behaves like a metal, with high toughness. The oxide, furthermore, exhibits high adhesion to the substrate because of an oxygen-rich diffusion hardened zone (DHZ) interposed between the oxide and the substrate. In this study, we demonstrate a two-step process that forms a thicker DHZ, and thus a greater depth of hardening, than can be obtained using a one-step oxidation process. The first step is thermal oxidation in air and the second step is a heat treatment in vacuum. The second step drives oxygen from the oxide formed in the first step deeper into the substrate to form a thicker DHZ. During the process only a portion of the oxide is dissolved. This new composition (DHOxZr) has an approximately 4-6 μm thick oxide, similar to that of OxZr. The nano-hardness of the oxide is similar, but the DHZ is approximately 10 times thicker. The stoichiometry of the oxide is similar, and a secondary phase rich in oxygen is present through the entire thickness. Due to the increased depth of hardening, the critical load required for the onset of oxide cracking is approximately 1.6 times higher than for the oxide of OxZr. This new composition has the potential to be used as a bearing surface in applications where a greater depth of hardening is required.
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Elsawah, S.; Pierce, S. A.; Ames, D. P.
2016-12-01
The National Socio-Environmental Synthesis Center (SESYNC) Core Modelling Practices Pursuit is developing resources to describe core practices for developing and using models to support integrated water resource management. These practices implement specific steps in the modelling process with an interdisciplinary perspective; however, the particular practice that is most appropriate depends on contextual aspects specific to the project. The first task of the pursuit is to identify the various steps for which implementation practices are to be described. This paper reports on those results. The paper draws on knowledge from the modelling process literature for environmental modelling (Jakeman et al., 2006), engaging stakeholders (Voinov and Bousquet, 2010) and general modelling (Banks, 1999), as well as the experience of the consortium members. We organise the steps around the four modelling phases. The planning phase identifies what is to be achieved, how and with what resources. The model is built and tested during the construction phase, and then used in the application phase. Finally, models that become part of the ongoing policy process require a maintenance phase. For each step, the paper focusses on what is to be considered or achieved, rather than how it is performed. This reflects the separation of the steps from the practices that implement them in different contexts. We support description of steps with a wide range of examples. Examples are designed to be generic and do not reflect any one project or context, but instead are drawn from common situations or from extremely different ones so as to highlight some of the issues that may arise at each step. References Banks, J. (1999). Introduction to simulation. In Proceedings of the 1999 Winter Simulation Conference. Jakeman, A. J., R. A. Letcher, and J. P. Norton (2006). Ten iterative steps in development and evaluation of environmental models. Environmental Modelling and Software 21, 602-614. Voinov, A. and F. Bousquet (2010). Modelling with stakeholders. Environmental Modelling & Software 25 (11), 1268-1281.
Vorstius, Christian; Radach, Ralph; Lang, Alan R
2012-02-01
Reflexive and voluntary levels of processing have been studied extensively with respect to possible impairments due to alcohol intoxication. This study examined alcohol effects at the 'automated' level of processing essential to many complex visual processing tasks (e.g., reading, visual search) that involve ongoing modifications or reprogramming of well-practiced routines. Data from 30 participants (16 male) were collected in two counterbalanced sessions (alcohol vs. no-alcohol control; mean breath alcohol concentration = 68 mg/dL vs. 0 mg/dL). Eye movements were recorded during a double-step task where 75% of trials involved two target stimuli in rapid succession (inter-stimulus interval [ISI]=40, 70, or 100 ms) so that they could elicit two distinct saccades or eye movements (double steps). On 25% of trials a single target appeared. Results indicated that saccade latencies were longer under alcohol. In addition, the proportion of single-step responses and the mean saccade amplitude (length) of primary saccades decreased significantly with increasing ISI. The key novel finding, however, was that the reprogramming time needed to cancel the first saccade and adjust saccade amplitude was extended significantly by alcohol. The additional time made available by prolonged latencies due to alcohol was not utilized by the saccade programming system to decrease the number of two-step responses. These results represent the first demonstration of specific alcohol-induced programming deficits at the automated level of oculomotor processing.
Calle-Castañeda, Susana M; Márquez-Godoy, Marco A; Hernández-Ortiz, Juan P
2017-12-29
Phosphorus is an essential nutrient for the synthesis of biomolecules and is particularly important in agriculture, as soils must be constantly supplemented with its inorganic form to ensure high yields and productivity. In this paper, we propose a process to solubilize phosphorus from phosphate rocks in which Acidithiobacillus thiooxidans cultures are pre-cultivated to foster the acidic conditions for bioleaching (a two-step "growing-then-recovery" approach). Our method solubilizes 100% of the phosphorus, whereas the traditional process without pre-cultivation (single-step "growing-and-recovery") results in a maximum of 56% solubilization. As a proof of principle, we demonstrate that even at low concentrations of the phosphate rock, 1% w/v, the bacterial culture is unviable and biological activity is not observed during the single-step process. On the other hand, in our method, the bacteria are grown without the rock, ensuring high acid production. Once pH levels are below 0.7, the mineral is added to the culture, resulting in high yields of biological solubilization. According to the Fourier transform infrared spectroscopy spectra, gypsum is the dominant phosphate phase after both the single- and two-step methods. However, calcite and fluorapatite, dominant in the untreated rock, are still present after the single-step process, highlighting the differences between the chemical and the biological methods. Our process opens new avenues for biotechnologies to recover phosphorus in tropical soils and in low-grade phosphate rock reservoirs.
Barbas, J P; Leahy, T; Horta, A E; García-Herreros, M
2018-03-20
Sperm cryopreservation in goats has been a challenge for many years due to the detrimental effects of seminal plasma enzymes produced by the bulbo-urethral glands, which catalyse the hydrolysis of lecithins in egg yolk to fatty acids and lysolecithins that are deleterious to spermatozoa. This necessitates additional processing steps for seminal plasma removal during sperm cryopreservation, triggering different sperm responses which may affect sperm functionality. The objective of the present study was to determine specific sperm subpopulation responses at different handling steps during the cryopreservation process by using functional sperm kinematic descriptors in caprine ejaculates. Buck ejaculates (n = 40) were analysed for sperm concentration, viability, morphology and acrosome integrity. Moreover, sperm motility was assessed using a computer-assisted sperm analysis (CASA) system after five different handling steps (fresh sperm, 1st washing, 2nd washing, cooling and frozen-thawed sperm) during a standard cryopreservation protocol for goat semen. The results were analysed using Principal Component Analysis (PCA) and multivariate clustering procedures to establish the relationship between the distribution of the subpopulations found and the functional sperm motility at each step. Except for the 1st and 4th steps, four sperm kinematic subpopulations were observed, explaining more than 75% of the variance. Based on velocity and linearity parameters and the subpopulations disclosed, the kinematic response varies among processing steps, modifying sperm movement trajectories in a subpopulation-specific and handling step-dependent manner (p < 0.001). The predominant motile subpopulation in freshly ejaculated buck sperm had very fast velocity characteristics and a non-linear trajectory (41.1%). Washing buck sperm twice altered the subpopulation structure, as did cooling, which resulted in a dramatic reduction in sperm velocities (p < 0.01). Frozen-thawed spermatozoa showed characteristics similar to cooled sperm, except for a further increase in linearity, with a large proportion of sperm attributed to a new slow, linear cluster (32.5%). In conclusion, this study confirms the variability and heterogeneity of goat sperm kinematic patterns throughout the cryopreservation process and suggests that the predominant motility pattern (assayed in vitro via CASA) of high quality spermatozoa might be typified by high speed and a non-linear trajectory. The relationships among the number and distribution of sperm subpopulations and the different handling steps were particularly relevant, especially after the cooling and post-thawing steps, when effects derived from these critical handling steps were evident and drastically altered the sperm motion patterns. Copyright © 2018 Elsevier Inc. All rights reserved.
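The subpopulation analysis described above (PCA followed by multivariate clustering of CASA kinematic descriptors) can be sketched generically as below. The feature names, the choice of four clusters, and the random data are placeholders, not the study's dataset or its exact clustering procedure.

```python
# Hedged sketch of the subpopulation analysis idea: reduce CASA kinematic descriptors
# with PCA and cluster sperm into motility subpopulations. Feature names, the choice
# of four clusters, and the random data are placeholders, not the study's dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: VCL, VSL, VAP, LIN, ALH (typical CASA descriptors), 500 tracked cells
X = rng.normal(size=(500, 5)) * [40, 25, 30, 0.2, 2] + [120, 60, 80, 0.6, 4]

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

for k in range(4):
    share = (labels == k).mean() * 100
    print(f"subpopulation {k}: {share:.1f}% of cells")
```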
Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.
Rogers, Mark W; Mille, Marie-Laure
2016-08-15
Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Organizational Factors and the Cancer Screening Process
Zapka, Jane; Edwards, Heather; Taplin, Stephen H.
2010-01-01
Cancer screening is a process of care consisting of several steps and interfaces. This article reviews what is known about the association between organizational factors and cancer screening rates and examines how organizational strategies can address the steps and interfaces of cancer screening in the context of both intraorganizational and interorganizational processes. We reviewed 79 studies assessing the relationship between organizational factors and cancer screening. Screening rates are largely driven by strategies to 1) limit the number of interfaces across organizational boundaries; 2) recruit patients, promote referrals, and facilitate appointment scheduling; and 3) promote continuous patient care. Optimal screening rates can be achieved when health-care organizations tailor strategies to the steps and interfaces in the cancer screening process that are most critical for their organizations, the providers who work within them, and the patients they serve. PMID:20386053
Rotor assembly and method for automatically processing liquids
Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.
1992-01-01
A rotor assembly for performing a relatively large number of processing steps upon a sample, such as a whole blood sample, and a diluent, such as water, includes a rotor body for rotation about an axis and including a network of chambers within which various processing steps are performed upon the sample and diluent and passageways through which the sample and diluent are transferred. A transfer mechanism is movable through the rotor body by the influence of a magnetic field generated adjacent the transfer mechanism and movable along the rotor body, and the assembly utilizes centrifugal force, a transfer of momentum and capillary action to perform any of a number of processing steps such as separation, aliquoting, transference, washing, reagent addition and mixing of the sample and diluent within the rotor body. The rotor body is particularly suitable for automatic immunoassay analyses.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-12-01
Search processes play key roles in various scientific fields. A widespread and effective search-process scheme, which we term Restart Search, is based on the following restart algorithm: i) set a timer and initiate a search task; ii) if the task was completed before the timer expired, then stop; iii) if the timer expired before the task was completed, then go back to the first step and restart the search process anew. In this paper, a branching feature is added to the restart algorithm: at every transition from the algorithm's third step to its first step, branching takes place, thus multiplying the search effort. This branching feature yields a search-process scheme which we term Branching Search. The running time of Branching Search is analyzed, closed-form results are established, and these results are compared to the corresponding running-time results of Restart Search.
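The restart algorithm lends itself to a short Monte-Carlo illustration. The sketch below is an assumption-laden toy, not the paper's closed-form analysis: it draws task-completion times from a heavy-tailed distribution and compares plain Restart Search with a branching variant in which the number of parallel searchers is multiplied at every restart. The Pareto law, timer value, and branching factor are all invented for illustration.

```python
# Toy Monte-Carlo comparison of Restart Search vs. Branching Search.
# Completion-time law, timer, and branching factor are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def restart_search(timer, branching=1, max_rounds=10_000):
    """Return the total running time of one search under restart-with-branching."""
    searchers, elapsed = 1, 0.0
    for _ in range(max_rounds):
        # Each active searcher draws an i.i.d. heavy-tailed completion time.
        times = rng.pareto(1.5, size=searchers) + 1.0
        fastest = times.min()
        if fastest <= timer:                 # task completed before the timer expired
            return elapsed + fastest
        elapsed += timer                     # timer expired: restart ...
        searchers = min(searchers * branching, 1_000_000)   # ... multiplying the effort
    return elapsed

runs = 2000
plain = np.mean([restart_search(timer=3.0, branching=1) for _ in range(runs)])
branch = np.mean([restart_search(timer=3.0, branching=2) for _ in range(runs)])
print(f"mean running time, Restart Search:   {plain:.2f}")
print(f"mean running time, Branching Search: {branch:.2f}")
```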
Development of metallization process. FSA project, cell and module formation research area
NASA Technical Reports Server (NTRS)
Garcia, A., III
1984-01-01
New pastes were evaluated that contained additives to aid in the silicon-to-metallization contact. None were completely successful. A reevaluation of the molybdenum oxide paste and the two-step screen printing process was done. The oxide paste did not show promise. The two-step process enabled soldering of the cells but the cells still had a high series resistance. Pastes are on order from a different manufacturer.
ERIC Educational Resources Information Center
Peters, Erin
2005-01-01
Deconstructing cookbook labs to require the students to be more thoughtful could break down perceived teacher barriers to inquiry learning. Simple steps that remove or disrupt the direct transfer of step-by-step procedures in cookbook labs make students think more critically about their process. Through trials in the author's middle school…
Step-by-step: a model for practice-based learning.
Kane, Gabrielle M
2007-01-01
Innovative technology has led to high-precision radiation therapy that has dramatically altered the practice of radiation oncology. This qualitative study explored the implementation of this innovation into practice from the perspective of the practitioners in a large academic radiation medicine program and aimed to improve understanding of and facilitate the educational process of this change. Multiprofession staff participated in a series of seven focus groups and nine in-depth interviews, and the descriptive data from the transcripts were analyzed using grounded theory methodology. Practitioners believed that there had been a major effect on many aspects of their practice. The team structure supported the adoption of change. The technology changed the way the practices worked. Learning new skills increased workload and stress but led to a new conception of the discipline and the generation of new practice-based knowledge. When the concepts were examined longitudinally, a four-step process of learning was identified. In step 1, there was anxiety as staff acquired the skills to use the technology. Step 2 involved learning to interpret new findings and images, experiencing uncertainty until new perspectives developed. Step 3 involved questioning assumptions and critical reflection, which resulted in new understanding. The final step 4 identified a process of constructing new knowledge through research, development, and dialogue within the profession. These findings expand our understanding of how practice-based learning occurs in the context of change and can guide learning activities appropriate to each stage.
Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki
2015-02-15
Analyses of gene expressions in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate method by sequencing (called "bead-seq") to obtain a whole gene expression profile for a single cell is proposed. A key feature of the method is to use a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents in a sample preparation process. By adding the washing steps, the next steps can be carried out under the optimal conditions without losing cDNAs. Error sources were carefully evaluated to conclude that the first several steps were the key steps. It is demonstrated that bead-seq is superior to the conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during sample preparation and sequencing processes. Copyright © 2014 Elsevier Inc. All rights reserved.
Sánchez-Martín, J; Ghebremichael, K; Beltrán-Heredia, J
2010-08-01
The coagulant proteins from Moringa oleifera purified with single-step and two-step ion-exchange processes were used for the coagulation of surface water from the Meuse river in The Netherlands. The performances of the two purified coagulants and the crude extract were assessed in terms of turbidity and DOC removal. The results indicated that the optimum dosage of the single-step purified coagulant was more than two times higher than that of the two-step purified coagulant in terms of turbidity removal, and the residual DOC with the two-step purified coagulant was lower than with the single-step purified coagulant or the crude extract. (c) 2010 Elsevier Ltd. All rights reserved.
Cross-current leaching of indium from end-of-life LCD panels.
Rocchetti, Laura; Amato, Alessia; Fonti, Viviana; Ubaldini, Stefano; De Michelis, Ida; Kopacek, Bernd; Vegliò, Francesco; Beolchini, Francesca
2015-08-01
Indium is a critical element mainly produced as a by-product of zinc mining, and it is largely used in the production process of liquid crystal display (LCD) panels. End-of-life LCDs represent a possible source of indium in the field of urban mining. In the present paper, we apply, for the first time, cross-current leaching to mobilize indium from end-of-life LCD panels. We carried out a series of treatments to leach indium. The best leaching conditions for indium were 2 M sulfuric acid at 80 °C for 10 min, which allowed us to completely mobilize indium. Taking into account the low indium content of end-of-life LCDs, of about 100 ppm, a single leaching step is not cost-effective. We tested 6 steps of cross-current leaching: in the first step indium leaching was complete, in the second step it was in the range of 85-90%, and with 6 steps it was about 50-55%. Indium concentration in the leachate was about 35 mg/L after the first leaching step, almost 2-fold higher at the second step and about 3-fold higher at the fifth step. We then hypothesized scaling up the cross-current leaching process to 10 steps, followed by cementation with zinc to recover indium. In this simulation, the process of indium recovery was advantageous from an economic and environmental point of view. Indeed, cross-current leaching allowed us to concentrate indium, save reagents, and reduce the emission of CO2 (with 10 steps we assessed that the emission of about 90 kg CO2-eq. could be avoided) thanks to the recovery of indium. This new strategy represents a useful approach for secondary production of indium from waste LCD panels. Copyright © 2015 Elsevier Ltd. All rights reserved.
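To make the "concentrate indium while reusing the leachate" argument concrete, the arithmetic below sketches how indium builds up in the liquor over successive cross-current steps. The liquid-to-solid ratio and the per-step extraction yields are assumptions loosely patterned on the figures quoted above; they are not the authors' mass balance.

```python
# Back-of-the-envelope cross-current leaching balance: the same leachate
# contacts a fresh batch of shredded LCD powder at every step, so indium
# accumulates in the liquor.  Indium grade (~100 ppm), liquid/solid ratio,
# and the declining per-step yields are illustrative assumptions.
indium_grade = 100e-6          # kg In per kg of LCD powder (~100 ppm)
solid_per_step = 1.0           # kg of fresh powder contacted at each step
liquor_volume = 3.0            # L of sulfuric acid leachate, reused throughout
yields = [1.00, 0.88, 0.80, 0.72, 0.63, 0.55]   # assumed extraction per step

dissolved = 0.0                # kg In accumulated in the liquor
for step, y in enumerate(yields, start=1):
    dissolved += y * indium_grade * solid_per_step
    conc_mg_per_L = dissolved * 1e6 / liquor_volume
    print(f"step {step}: In concentration ≈ {conc_mg_per_L:.1f} mg/L")
```

With these assumed numbers the first step yields roughly 33 mg/L and the second step roughly doubles it, in line with the orders of magnitude quoted in the abstract.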
van Limburg, Maarten; Wentzel, Jobke; Sanderman, Robbert; van Gemert-Pijnen, Lisette
2015-08-13
It is acknowledged that the success and uptake of eHealth improve with the involvement of users and stakeholders to make technology reflect their needs. Involving stakeholders in implementation research is thus a crucial element in developing eHealth technology. Business modeling is an approach to guide implementation research for eHealth. Stakeholders are involved in business modeling by identifying relevant stakeholders, conducting value co-creation dialogs, and co-creating a business model. Because implementation activities are often underestimated as a crucial step while developing eHealth, comprehensive and applicable approaches geared toward business modeling in eHealth are scarce. This paper demonstrates the potential of several stakeholder-oriented analysis methods; their practical application is illustrated using Infectionmanager as an example case. In this paper, we aim to demonstrate how business modeling, with a focus on stakeholder involvement, is used to co-create an eHealth implementation. We divided business modeling into 4 main research steps. As part of stakeholder identification, we performed literature scans, expert recommendations, and snowball sampling (Step 1). For stakeholder analyses, we performed "basic stakeholder analysis," stakeholder salience, and ranking/analytic hierarchy process (Step 2). For value co-creation dialogs, we performed a process analysis and stakeholder interviews based on the business model canvas (Step 3). Finally, for business model generation, we combined all findings into the business model canvas (Step 4). Based on the applied methods, we synthesized a step-by-step guide for business modeling with stakeholder-oriented analysis methods that we consider suitable for implementing eHealth. The step-by-step guide for business modeling with stakeholder involvement enables eHealth researchers to apply a systematic, multidisciplinary, co-creative approach for implementing eHealth. Business modeling becomes an active part of the entire development process of eHealth and introduces an early focus on implementation, in which stakeholders help to co-create the basis necessary for satisfactory success and uptake of the eHealth technology.
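Step 2 above mentions ranking stakeholders with the analytic hierarchy process (AHP). As a hedged illustration of that sub-step only, the sketch below derives priority weights from a pairwise-comparison matrix via its principal eigenvector; the stakeholder names and judgments are invented and do not come from the study.

```python
# Illustrative AHP ranking for the stakeholder-analysis sub-step.
# The pairwise-comparison judgments (Saaty 1-9 scale) and stakeholder names
# are hypothetical; only the eigenvector mechanics are generic AHP.
import numpy as np

stakeholders = ["clinicians", "IT department", "hospital management", "patients"]
# A[i, j] = how much more important stakeholder i is than j (reciprocal matrix).
A = np.array([
    [1.0, 3.0, 2.0, 1.0],
    [1/3, 1.0, 1/2, 1/4],
    [1/2, 2.0, 1.0, 1/3],
    [1.0, 4.0, 3.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()

# Consistency ratio (CR < 0.1 is the usual acceptability threshold).
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
cr = ci / 0.90                     # 0.90 = Saaty random index for n = 4
for name, w in sorted(zip(stakeholders, weights), key=lambda p: -p[1]):
    print(f"{name:22s} weight = {w:.2f}")
print(f"consistency ratio = {cr:.2f}")
```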
Magnetoresistance measurement of permalloy thin film rings with triangular fins
NASA Astrophysics Data System (ADS)
Lai, Mei-Feng; Hsu, Chia-Jung; Liao, Chun-Neng; Chen, Ying-Jiun; Wei, Zung-Hang
2010-01-01
Magnetization reversals in permalloy rings controlled by nucleation sites using triangular fins on the same side and diagonal with respect to the field direction are demonstrated by magnetoresistance measurement and micromagnetic simulation. In the ring with triangular fins on the same side, there exists a two-step reversal from the onion to the flux-closure state (or vortex state) and then from the flux-closure (or vortex) state to the reverse onion state; in the ring with diagonal triangular fins, a one-step reversal occurs directly from the onion to the reverse onion state. The reversal processes are repeatable and controllable, in contrast to an ideal ring without triangular fins where one-step and two-step reversals occur randomly in the sweep-up and sweep-down processes.
Carbonation of metal silicates for long-term CO2 sequestration
Blencoe, James G; Palmer, Donald A; Anovitz, Lawrence M; Beard, James S
2014-03-18
In a preferred embodiment, the invention relates to a process of sequestering carbon dioxide. The process comprises the steps of: (a) reacting a metal silicate with a caustic alkali-metal hydroxide to produce a hydroxide of the metal formerly contained in the silicate; (b) reacting carbon dioxide with at least one of a caustic alkali-metal hydroxide and an alkali-metal silicate to produce at least one of an alkali-metal carbonate and an alkali-metal bicarbonate; and (c) reacting the metal hydroxide product of step (a) with at least one of the alkali-metal carbonate and the alkali-metal bicarbonate produced in step (b) to produce a carbonate of the metal formerly contained in the metal silicate of step (a).
Carbonation of metal silicates for long-term CO2 sequestration
Blencoe, James G [Harriman, TN; Palmer, Donald A [Oliver Springs, TN; Anovitz, Lawrence M [Knoxville, TN; Beard, James S [Martinsville, VA
2012-02-14
In a preferred embodiment, the invention relates to a process of sequestering carbon dioxide. The process comprises the steps of: (a) reacting a metal silicate with a caustic alkali-metal hydroxide to produce a hydroxide of the metal formerly contained in the silicate; (b) reacting carbon dioxide with at least one of a caustic alkali-metal hydroxide and an alkali-metal silicate to produce at least one of an alkali-metal carbonate and an alkali-metal bicarbonate; and (c) reacting the metal hydroxide product of step (a) with at least one of the alkali-metal carbonate and the alkali-metal bicarbonate produced in step (b) to produce a carbonate of the metal formerly contained in the metal silicate of step (a).
Algorithmic and heuristic processing of information by the nervous system.
Restian, A
1980-01-01
Starting from the fact that the nervous system must discover the information it needs, the author describes the way it decodes the received message. The logical circuits of the nervous system participate in decoding the message by submitting the received signals to a process through which the information they carry is discovered step by step. The received signals, as information, can be processed algorithmically or heuristically. Algorithmic processing is done according to precise rules, which must be fulfilled step by step. By algorithmic processing, one develops somatic and vegetative reflexes such as blood pressure, heart frequency or water metabolism control. When it does not have precise rules of information processing, or when algorithmic processing would take a very long time, the nervous system must use heuristic processing. This is the feature that differentiates the human brain from the electronic computer, which can work only according to extremely precise rules. The human brain can work according to less precise rules because it can resort to trial-and-error operations and because it works according to a form of logic. Working with superior-order signals, which represent the class of all the inferior-type signals from which they arise, the human brain need not perform all the operations that it would otherwise have to perform on the inferior-type signals. Therefore the brain tries to submit the received signals to as intensive a superization as possible. All informational processing, and especially heuristic processing, is accompanied by a certain affective color, and the brain cannot operate without it. Emotions, passions and sentiments usually compensate for the lack of precision of the heuristic programmes. Finally, the author shows that the study of informational, and especially heuristic, processing can contribute to a better understanding of the transition from neurological to psychological activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, S.; Yan, F.; Dorn, D.
2012-06-01
Photoluminescence (PL) imaging techniques can be applied to multicrystalline silicon wafers throughout the manufacturing process. Both band-to-band PL and defect-band emissions, which are longer-wavelength emissions from sub-bandgap transitions, are used to characterize wafer quality and defect content on starting multicrystalline silicon wafers and neighboring wafers processed at each step through completion of finished cells. Both PL imaging techniques spatially highlight defect regions that represent dislocations and defect clusters. The relative intensities of these imaged defect regions change with processing. Band-to-band PL on wafers in the later steps of processing shows good correlation to cell quality and performance. The defect band images show regions that change relative intensity through processing, and better correlation to cell efficiency and reverse-bias breakdown is more evident at the starting wafer stage as opposed to later process steps. We show that thermal processing in the 200-400 °C range causes impurities to diffuse to different defect regions, changing their relative defect band emissions.
A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.
Richter, Mathis; Lins, Jonas; Schöner, Gregor
2017-01-01
Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases-all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition. Copyright © 2017 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
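A minimal numerical sketch of the cdf idea follows, under strong assumptions (an exponential within-day intensity distribution and a constant infiltration capacity, both with invented values); it only illustrates how passing the intensity distribution, rather than the daily mean, through a nonlinear runoff function changes the daily total.

```python
# Sketch of the cdf-based temporal-scaling idea: represent within-day rainfall
# intensity by a distribution, push it through a nonlinear infiltration-excess
# function, and integrate back to a daily total.  The exponential intensity
# model and the constant infiltration capacity are illustrative assumptions.
import numpy as np

daily_rain = 40.0          # mm of rain falling over the day
rain_duration = 6.0        # hours of rainfall (assumed)
f_capacity = 8.0           # mm/h infiltration capacity (assumed constant)

mean_intensity = daily_rain / rain_duration          # mm/h

# Daily-lumped model: compare the mean intensity with the capacity.
runoff_daily = max(mean_intensity - f_capacity, 0.0) * rain_duration

# cdf model: intensity i ~ Exp(mean_intensity); integrate the excess over the pdf.
i = np.linspace(0.0, 20.0 * mean_intensity, 20_000)
pdf = np.exp(-i / mean_intensity) / mean_intensity
excess = np.maximum(i - f_capacity, 0.0)
di = i[1] - i[0]
runoff_cdf = float((excess * pdf).sum() * di) * rain_duration

print(f"runoff, daily mean intensity : {runoff_daily:.1f} mm")
print(f"runoff, intensity cdf        : {runoff_cdf:.1f} mm")
```

With these invented numbers the mean daily intensity never exceeds the infiltration capacity, so the lumped model predicts zero runoff, while the distributed-intensity calculation still produces runoff from the high-intensity tail, which is exactly the nonlinearity the abstract describes.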
Perez, Susan L; Paterniti, Debora A; Wilson, Machelle; Bell, Robert A; Chan, Man Shan; Villareal, Chloe C; Nguyen, Hien Huy; Kravitz, Richard L
2015-07-20
Little is known about the processes people use to find health-related information on the Internet or the individual characteristics that shape selection of information-seeking approaches. Our aim was to describe the processes by which users navigate the Internet for information about a hypothetical acute illness and to identify individual characteristics predictive of their information-seeking strategies. Study participants were recruited from public settings and agencies. Interested individuals were screened for eligibility using an online questionnaire. Participants listened to one of two clinical scenarios—consistent with influenza or bacterial meningitis—and then conducted an Internet search. Screen-capture video software captured Internet search mouse clicks and keystrokes. Each step of the search was coded as hypothesis testing (etiology), evidence gathering (symptoms), or action/treatment seeking (behavior). The coded steps were used to form a step-by-step pattern of each participant's information-seeking process. A total of 78 Internet health information seekers ranging from 21-35 years of age and who experienced barriers to accessing health care services participated. We identified 27 unique patterns of information seeking, which were grouped into four overarching classifications based on the number of steps taken during the search, whether a pattern consisted of developing a hypothesis and exploring symptoms before ending the search or searching an action/treatment, and whether a pattern ended with action/treatment seeking. Applying dual-processing theory, we categorized the four overarching pattern classifications as either System 1 (41%, 32/78), unconscious, rapid, automatic, and high capacity processing; or System 2 (59%, 46/78), conscious, slow, and deliberative processing. Using multivariate regression, we found that System 2 processing was associated with higher education and younger age. We identified and classified two approaches to processing Internet health information. System 2 processing, a methodical approach, most resembles the strategies for information processing that have been found in other studies to be associated with higher-quality decisions. We conclude that the quality of Internet health-information seeking could be improved through consumer education on methodical Internet navigation strategies and the incorporation of decision aids into health information websites.
Paterniti, Debora A; Wilson, Machelle; Bell, Robert A; Chan, Man Shan; Villareal, Chloe C; Nguyen, Hien Huy; Kravitz, Richard L
2015-01-01
Background Little is known about the processes people use to find health-related information on the Internet or the individual characteristics that shape selection of information-seeking approaches. Objective Our aim was to describe the processes by which users navigate the Internet for information about a hypothetical acute illness and to identify individual characteristics predictive of their information-seeking strategies. Methods Study participants were recruited from public settings and agencies. Interested individuals were screened for eligibility using an online questionnaire. Participants listened to one of two clinical scenarios—consistent with influenza or bacterial meningitis—and then conducted an Internet search. Screen-capture video software captured Internet search mouse clicks and keystrokes. Each step of the search was coded as hypothesis testing (etiology), evidence gathering (symptoms), or action/treatment seeking (behavior). The coded steps were used to form a step-by-step pattern of each participant’s information-seeking process. A total of 78 Internet health information seekers ranging from 21-35 years of age and who experienced barriers to accessing health care services participated. Results We identified 27 unique patterns of information seeking, which were grouped into four overarching classifications based on the number of steps taken during the search, whether a pattern consisted of developing a hypothesis and exploring symptoms before ending the search or searching an action/treatment, and whether a pattern ended with action/treatment seeking. Applying dual-processing theory, we categorized the four overarching pattern classifications as either System 1 (41%, 32/78), unconscious, rapid, automatic, and high capacity processing; or System 2 (59%, 46/78), conscious, slow, and deliberative processing. Using multivariate regression, we found that System 2 processing was associated with higher education and younger age. Conclusions We identified and classified two approaches to processing Internet health information. System 2 processing, a methodical approach, most resembles the strategies for information processing that have been found in other studies to be associated with higher-quality decisions. We conclude that the quality of Internet health-information seeking could be improved through consumer education on methodical Internet navigation strategies and the incorporation of decision aids into health information websites. PMID:26194787
Persistent Step-Flow Growth of Strained Films on Vicinal Substrates
NASA Astrophysics Data System (ADS)
Hong, Wei; Lee, Ho Nyung; Yoon, Mina; Christen, Hans M.; Lowndes, Douglas H.; Suo, Zhigang; Zhang, Zhenyu
2005-08-01
We propose a model of persistent step flow, emphasizing dominant kinetic processes and strain effects. Within this model, we construct a morphological phase diagram, delineating a regime of step flow from regimes of step bunching and island formation. In particular, we predict the existence of concurrent step bunching and island formation, a new growth mode that competes with step flow for phase space, and show that the deposition flux and temperature must be chosen within a window in order to achieve persistent step flow. The model rationalizes the diverse growth modes observed in pulsed laser deposition of SrRuO3 on SrTiO3.
Digital enhancement of X-rays for NDT
NASA Technical Reports Server (NTRS)
Butterfield, R. L.
1980-01-01
Report is "cookbook" for digital processing of industrial X-rays. Computer techniques, previously used primarily in laboratory and developmental research, have been outlined and codified into step by step procedures for enhancing X-ray images. Those involved in nondestructive testing should find report valuable asset, particularly is visual inspection is method currently used to process X-ray images.
Preventing Dust Collection: Transforming Student Affairs Strategic Planning into Tangible Results
ERIC Educational Resources Information Center
Taylor, Simone Himbeault; Matney, Malinda M.
2007-01-01
The Division of Student Affairs at the University of Michigan in Ann Arbor engaged in an iterative strategic process to create and implement a set of long-range goals. This strategic journey continues to evolve, uniting a guiding framework of strategic planning steps, a reflective process with an assessment component within each step, and a group…
2014-06-01
and Coastal Data Information Program ( CDIP ). This User’s Guide includes step-by-step instructions for accessing the GLOS/GLCFS database via WaveNet...access, processing and analysis tool; part 3 – CDIP database. ERDC/CHL CHETN-xx-14. Vicksburg, MS: U.S. Army Engineer Research and Development Center
WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database
2015-02-01
Program ( CDIP ); and Part 4 for the Great Lakes Observing System/Coastal Forecasting System (GLOS/GLCFS). Using step-by-step instructions, this Part 5...Demirbilek, Z., L. Lin, and D. Wilson. 2014a. WaveNet: A web-based metocean data access, processing, and analysis tool; part 3– CDIP database
How to Develop Children as Researchers: A Step-by-Step Guide to Teaching the Research Process
ERIC Educational Resources Information Center
Kellett, Mary
2005-01-01
The importance of research in professional and personal development is increasingly being acknowledged. So why should children not benefit in a similar way? Traditionally, children have been excluded from this learning process because research methodology is considered too difficult for them. Principal obstacles focus around three key barriers:…
Selling the PSS in a School of Business: Relationship Selling in Practice
ERIC Educational Resources Information Center
Titus, David; Harris, Garth; Gulati, Rajesh; Bristow, Dennis
2017-01-01
This paper presents a step-by-step process for the development and implementation of a professional selling specialization program in the marketing curriculum of a school of business at an AACSB accredited state university. The program is presented in detail along with the process followed in order to develop support for the program with three…
Abstract: This case study application provides discussion on a selected application of advanced concepts, included in the End of Asset Life Reinvestment decision-making process tool, using a utility practitioner's data set. The tool provides step-by-step process guidance to the as...
Facilitating the Research Paper Process: A Guide for the Social Science Instructor.
ERIC Educational Resources Information Center
Medina, Suzanne L.
This paper describes the approach used successfully at California State University, Dominguez Hills, to instruct college students in the research paper writing process. To achieve the results, the instructor followed a specific set of steps during a class meeting set aside for this specialized training. This paper details each step in the…
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Weinbaum, Rebecca K.
2016-01-01
Recently, several authors have attempted to make the literature review process more transparent by providing a step-by-step guide to conducting literature reviews. However, although these works are very informative, none of them delineate how to display information extracted from literature reviews in a reader-friendly and visually appealing…
A Seven-Step Process To Align Curriculum with Oregon State Content Standards.
ERIC Educational Resources Information Center
Golden, Nancy; Lane, Marilyn
1998-01-01
The University of Oregon (UO) and Captain Robert Gray Elementary School formed a partnership where UO students used the elementary school as a case study for curriculum research. This document gives an overview of the 7-step process the students used to align the school's curriculum with Oregon's content and performance standards. The text opens…
ERIC Educational Resources Information Center
McAliney, Peter J.
2009-01-01
This article presents a process for valuing a portfolio of learning assets used by line executives across industries to value traditional business assets. Embedded within the context of enterprise risk management, this strategic asset allocation process is presented step by step, providing readers the operational considerations to implement this…
The Rhetorical Cycle: Reading, Thinking, Speaking, Listening, Discussing, Writing.
ERIC Educational Resources Information Center
Keller, Rodney D.
The rhetorical cycle is a step-by-step approach that provides classroom experience before students actually write, thereby making the writing process less frustrating for them. This approach consists of six sequential steps: reading, thinking, speaking, listening, discussing, and finally writing. Readings serve not only as models of rhetorical…
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
Steps in Performing a Communication Audit.
ERIC Educational Resources Information Center
Sincoff, Michael Z.; And Others
This paper develops the step-by-step processes necessary to conduct a communication audit in order to determine the communication effectiveness of an organization. The authors stress the responsibilities of both the audit team and the organization's top management as they interact during progressive phases of the audit. Emphasis is placed on…
Some Key Factors in Policy Implementation.
ERIC Educational Resources Information Center
Rowen, Henry
Business policy texts identify numerous steps that make up the policy implementation process for private firms. On the surface, these steps also appear applicable to the implementation of public policies. However, the problems of carrying out these implementing steps in the public sector are significantly different than in the private sector due…
A Selection Method That Succeeds!
ERIC Educational Resources Information Center
Weitman, Catheryn J.
Provided a structural selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves review of current job descriptions…
Steps in the open space planning process
Stephanie B. Kelly; Melissa M. Ryan
1995-01-01
This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied various size communities. The intent is to provide a framework to develop an open space plan that meets Massachusetts requirements for funding of open space acquisition.
Disciplinary Counseling: The First Step toward Due Process.
ERIC Educational Resources Information Center
Cunningham, Patrick J.
1980-01-01
The oral reprimand is seen as the most important step in a corrective discipline procedure. Steps of disciplinary counseling include: always counsel in a private place; identify the problem; identify the desired behavior; define the consequences; get commitment from employee; identify session as oral reprimand; and monitor and follow up.(MLW)
Processing-Related Issues for the Design and Lifing of SiC/SiC Hot-Section Components
NASA Technical Reports Server (NTRS)
DiCarlo, J.; Bhatt, R.; Morscher, G.; Yun, H. M.
2006-01-01
For successful SiC/SiC engine components, numerous process steps related to the fiber, fiber architecture, interphase coating, and matrix need to be optimized. Under recent NASA-sponsored programs, it was determined that many of these steps in their initial approach were inadequate, resulting in less than optimum thermostructural and life properties for the as-fabricated components. This presentation will briefly review many of these process issues, the key composite properties they degrade, their underlying mechanisms, and current process remedies developed by NASA and others.
Pre-Finishing of SiC for Optical Applications
NASA Technical Reports Server (NTRS)
Rozzi, Jay; Clavier, Odile; Gagne, John
2011-01-01
13 Manufacturing & Prototyping A method is based on two unique processing steps that are both based on deterministic machining processes using a single-point diamond turning (SPDT) machine. In the first step, a high-MRR (material removal rate) process is used to machine the part within several microns of the final geometry. In the second step, a low-MRR process is used to machine the part to near optical quality using a novel ductile regime machining (DRM) process. DRM is a deterministic machining process associated with conditions under high hydrostatic pressures and very small depths of cut. Under such conditions, using high negative-rake angle cutting tools, the high-pressure region near the tool corresponds to a plastic zone, where even a brittle material will behave in a ductile manner. In the high-MRR processing step, the objective is to remove material with a sufficiently high rate such that the process is economical, without inducing large-scale subsurface damage. A laser-assisted machining approach was evaluated whereby a CO2 laser was focused in advance of the cutting tool. While CVD (chemical vapor deposition) SiC was successfully machined with this approach, the cutting forces were substantially higher than cuts at room temperature under the same machining conditions. During the experiments, the expansion of the part and the tool due to the heating was carefully accounted for. The higher cutting forces are most likely due to a small reduction in the shear strength of the material compared with a larger increase in friction forces due to the thermal softening effect. The key advantage is that the hybrid machine approach has the potential to achieve optical quality without the need for a separate optical finishing step. Also, this method is scalable, so one can easily progress from machining 50-mm-diameter samples to the 250-mm-diameter mirror that NASA desires.
Gallium arsenide processing for gate array logic
NASA Technical Reports Server (NTRS)
Cole, Eric D.
1989-01-01
The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium arsenide is an extremely important material for high-speed electronic applications in both digital and analog circuits, since its electron mobility is 3 to 5 times that of silicon, which allows faster switching times for devices fabricated with it. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and since it contains arsenic, GaAs can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical and simple active device which can be fabricated in GaAs. It utilizes an ohmic source and drain contact separated by a Schottky gate. The gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. A process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts. It was found that the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device. Thus all devices appeared as open circuits. This problem is being corrected, and since it was the last step in the process, the correction should be successful. The second planned stage involves the circuit assembly of the discrete MESFETs into logic gates for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.
Miyata, Kazuki; Tracey, John; Miyazawa, Keisuke; Haapasilta, Ville; Spijker, Peter; Kawagoe, Yuta; Foster, Adam S; Tsukamoto, Katsuo; Fukuma, Takeshi
2017-07-12
The microscopic understanding of crystal growth and dissolution processes has been greatly advanced by the direct imaging of nanoscale step flows by atomic force microscopy (AFM), optical interferometry, and X-ray microscopy. However, one of the most fundamental processes governing their kinetics, namely, atomistic events at the step edges, has not been well understood. In this study, we have developed high-speed frequency modulation AFM (FM-AFM) and enabled true atomic-resolution imaging in liquid at ∼1 s/frame, which is ∼50 times faster than conventional FM-AFM. With the developed AFM, we have directly imaged subnanometer-scale surface structures around the moving step edges of calcite during its dissolution in water. The obtained images reveal that a transition region with a typical width of a few nanometers is formed along the step edges. Building upon insight from previous studies, our simulations suggest that the transition region is most likely a Ca(OH)2 monolayer formed as an intermediate state in the dissolution process. On the basis of this finding, we improve our understanding of the atomistic dissolution model of calcite in water. These results open up a wide range of future applications of high-speed FM-AFM to studies of various dynamic processes at solid-liquid interfaces with true atomic resolution.
Koken, Juline A.; Naar-King, Sylvie; Umasa, Sanya; Parsons, Jeffrey T.; Saengcharnchai, Pichai; Phanuphak, Praphan; Rongkavilit, Chokechai
2013-01-01
The provision of culturally relevant yet evidence-based interventions has become crucial to global HIV prevention and treatment efforts. In Thailand, where treatment for HIV has become widely available, medication adherence and risk behaviors remain an issue for Thai youth living with HIV. Previous research on motivational interviewing (MI) has proven effective in promoting medication adherence and HIV risk reduction in the United States. However, to test the efficacy of MI in the Thai context a feasible method for monitoring treatment fidelity must be implemented. This article describes a collaborative three-step process model for implementing the MI Treatment Integrity (MITI) across cultures while identifying linguistic issues that the English-originated MITI was not designed to detect as part of a larger intervention for Thai youth living with HIV. Step 1 describes the training of the Thai MITI coder, Step 2 describes identifying cultural and linguistic issues unique to the Thai context, and Step 3 describes an MITI booster training and incorporation of the MITI feedback into supervision and team discussion. Throughout the process the research team collaborated to implement the MITI while creating additional ways to evaluate in-session processes that the MITI is not designed to detect. The feasibility of using the MITI as a measure of treatment fidelity for MI delivered in the Thai linguistic and cultural context is discussed. PMID:22228776
Role of the Heat Sink Layer Ta for Ultrafast Spin Dynamic Process in Amorphous TbFeCo Thin Films
NASA Astrophysics Data System (ADS)
Ren, Y.; Zhang, Z. Z.; Min, T.; Jin, Q. Y.
The ultrafast demagnetization processes (UDP) in Ta (t nm)/TbFeCo (20 nm) films have been studied using the time-resolved magneto-optical Kerr effect (TRMOKE). With a fixed pump fluence of 2 mJ/cm2, for the sample without a Ta underlayer (t = 0 nm), we observed the UDP showing a two-step decay behavior, with a relatively longer decay time (τ2) around 3.0 ps in the second step due to the equilibrium of spin-lattice relaxation following the 4f occupation. When a 10 nm Ta layer is deposited, the two-step demagnetization still exists while τ2 decreases to ~1.9 ps. Nevertheless, the second-step decay disappears as the Ta layer thickness is increased up to 20 nm; only the first-step UDP occurs within 500 fs, followed by a fast recovery process. The rapid magnetization recovery rate strongly depends on the pump fluence. We infer that the Ta layer provides conduction electrons involved in the thermal equilibrium of the spin-lattice interaction and serves as a heat bath taking energy away from the spins of the TbFeCo alloy film in the UDP.
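For readers who want to see how two decay constants of this kind are typically extracted from a TRMOKE trace, here is a hedged curve-fitting sketch: a double-exponential demagnetization model fitted with scipy. The functional form and the synthetic data are assumptions for illustration, not the authors' analysis code.

```python
# Hedged sketch: extract two-step demagnetization times (tau1, tau2) by fitting
# a double-exponential drop to a TRMOKE-like trace.  Model form and synthetic
# data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def two_step(t, a1, tau1, a2, tau2):
    """Normalized Kerr-signal drop with a fast and a slow demagnetization step."""
    return -a1 * (1 - np.exp(-t / tau1)) - a2 * (1 - np.exp(-t / tau2))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 300)                       # pump-probe delay (ps)
truth = two_step(t, a1=0.5, tau1=0.3, a2=0.3, tau2=3.0)
signal = truth + rng.normal(scale=0.01, size=t.size)  # add measurement noise

popt, _ = curve_fit(two_step, t, signal, p0=(0.4, 0.5, 0.2, 2.0))
a1, tau1, a2, tau2 = popt
print(f"tau1 ≈ {tau1:.2f} ps (first step), tau2 ≈ {tau2:.2f} ps (second step)")
```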
Process for Mapping Global Health Competencies in Undergraduate and Graduate Nursing Curricula.
Dawson, Martha; Gakumo, C Ann; Phillips, Jennan; Wilson, Lynda
2016-01-01
Determining the extent to which existing nursing curricula prepare students to address global health issues is a critical step toward ensuring competence to practice in an increasingly globalized world. This article describes the process used by nursing faculty at a public university in the southern United States to assess the extent to which global health competencies for nurses were being addressed across nursing programs. Steps used and lessons learned throughout this process are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toba Y.; Yagi, K.
1984-01-01
Drastic changes of (p,t) analyzing powers for the four Ni isotopes in ground-state transitions were observed. The changes are not explained by direct one-step processes but are interpreted by including strong two-step (p,d) (d,t) processes. Interference between the two processes of comparable intensities is essential. Marked incident-energy dependence of the analyzing powers is interpreted similarly.
Zero Liquid Discharge (ZLD) System for Flue-Gas Derived Water From Oxy-Combustion Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivaram Harendra; Danylo Oryshchyn; Thomas Ochs
2011-10-16
Researchers at the National Energy Technology Laboratory (NETL) located in Albany, Oregon, have patented a process - Integrated Pollutant Removal (IPR) - that uses off-the-shelf technology to produce a sequestration-ready CO2 stream from an oxy-combustion power plant. Capturing CO2 from fossil-fuel combustion generates a significant water product which can be tapped for use in the power plant and its peripherals. Water condensed in the IPR® process may contain fly ash particles, sodium (from pH control), and sulfur species, as well as heavy metals, cations and anions. NETL is developing a treatment approach for zero liquid discharge while maximizing available heat from IPR. Current treatment-process steps being studied are flocculation/coagulation, for removal of cations and fine particles, and reverse osmosis, for anion removal as well as for scavenging the remaining cations. After the reverse osmosis process steps, thermal evaporation and crystallization steps will be carried out in order to build the whole zero liquid discharge (ZLD) system for flue-gas condensed wastewater. Gypsum is the major product from the crystallization process. Fast, in-line treatment of water for re-use in IPR seems to be one practical step for minimizing water treatment requirements for CO2 capture. The results obtained from the above experiments are being used to build water treatment models.
Development and evaluation of polyvinyl-alcohol blend polymer films as battery separators
NASA Technical Reports Server (NTRS)
Manzo, M. A.
1982-01-01
Several dialdehydes and epoxies were evaluated for their suitability as cross-linkers. Optimum concentrations of several cross-linking reagents were determined. A two-step method of cross-linking, which involves treatment of the film in an acid or acid periodate bath, was investigated and dropped in favor of a one-step method in which the acid catalyst, which initiates cross-linking, is added to the PVA - cross-linker solution before casting. The cross-linking was thus achieved during the drying step. This one-step method was much more adaptable to commercial processing. Cross-linked films were characterized as alkaline battery separators. Films were prepared in the lab and tested in cells in order to evaluate the effect of film composition and a number of processing parameters on cell performance. These tests were conducted in order to provide a broader data base from which to select optimum processing parameters. Results of the separator screening tests and the cell tests are discussed.
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice instead of normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
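A minimal sketch of the CCA-based reconstruction idea follows, assuming paired, vectorised thermal and visible face images for training. The image size, component count, and the restriction to the global whole-image step (omitting the second, patch-wise step) are simplifications of the two-step method described above, and the data are synthetic.

```python
# Hedged sketch of CCA-based thermal-to-visible reconstruction (global step
# only; the paper's second, patch-wise refinement step is omitted).  Image
# size and the number of canonical components are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n_pairs, h, w = 300, 16, 16
# Stand-ins for co-registered training pairs: rows are flattened face images.
X_thermal = rng.normal(size=(n_pairs, h * w))
Y_visible = 0.6 * X_thermal + 0.4 * rng.normal(size=(n_pairs, h * w))

# Learn the coupled subspaces relating the two spectra.
cca = CCA(n_components=20, max_iter=1000)
cca.fit(X_thermal, Y_visible)

# Reconstruct the visible-spectrum image for a new thermal capture.
new_thermal = rng.normal(size=(1, h * w))
visible_estimate = cca.predict(new_thermal)[0].reshape(h, w)
print("reconstructed visible image shape:", visible_estimate.shape)
```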
Processes to remove acid forming gases from exhaust gases
Chang, Shih-Ger
1994-01-01
The present invention relates to a process for reducing the concentration of NO in a gas, which process comprises: (A) contacting a gas sample containing NO with a gaseous oxidizing agent to oxidize the NO to NO2; (B) contacting the gas sample of step (A) comprising NO2 with an aqueous reagent of bisulfite/sulfite and a compound selected from urea, sulfamic acid, hydrazinium ion, hydrazoic acid, nitroaniline, sulfanilamide, sulfanilic acid, mercaptopropanoic acid, mercaptosuccinic acid, cysteine or combinations thereof at between about 0 and 100 °C at a pH of between about 1 and 7 for between about 0.01 and 60 sec; and (C) optionally contacting the reaction product of step (A) with conventional chemical reagents to reduce the concentrations of the organic products of the reaction in step (B) to environmentally acceptable levels. Urea or sulfamic acid are preferred, especially sulfamic acid, and step (C) is not necessary or performed.
Macro-fingerprint analysis-through-separation of licorice based on FT-IR and 2DCOS-IR
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Ping; Xu, Changhua; Yang, Yan; Li, Jin; Chen, Tao; Li, Zheng; Cui, Weili; Zhou, Qun; Sun, Suqin; Li, Huifen
2014-07-01
In this paper, a step-by-step analysis-through-separation method under the navigation of multi-step IR macro-fingerprint (FT-IR integrated with second derivative IR (SD-IR) and 2DCOS-IR) was developed for comprehensively characterizing the hierarchical chemical fingerprints of licorice from entirety to single active components. Subsequently, the chemical profile variation rules of three parts (flavonoids, saponins and saccharides) in the separation process were holistically revealed, and the number of matching peaks and the correlation coefficients with standards of pure compounds increased along the extraction directions. The findings were supported by UPLC results and a verification experiment of the aqueous separation process. It has been demonstrated that the developed multi-step IR macro-fingerprint analysis-through-separation approach could be a rapid, effective and integrated method not only for objectively providing comprehensive chemical characterization of licorice and all its separated parts, but also for rapidly revealing the global enrichment trend of the active components in the licorice separation process.
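As an illustration of the "matching peaks and correlation coefficients with standards" part of the workflow, the sketch below computes a second-derivative spectrum with a Savitzky-Golay filter and correlates a separated fraction against a reference standard. The synthetic spectra, band positions, and filter settings are assumptions, not the study's data.

```python
# Hedged sketch: second-derivative IR (SD-IR) via Savitzky-Golay filtering and
# correlation of a separated fraction against a pure-compound standard.
# Synthetic spectra, window length and polynomial order are assumptions.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(4)
wavenumber = np.linspace(400.0, 4000.0, 1800)          # cm^-1

def band(center, width, height):
    return height * np.exp(-((wavenumber - center) / width) ** 2)

standard = band(1610, 40, 1.0) + band(1450, 30, 0.6) + band(1050, 60, 0.8)
fraction = 0.9 * standard + band(2930, 50, 0.3) + 0.02 * rng.normal(size=wavenumber.size)

# Second-derivative spectra sharpen overlapping bands before comparison.
sd_standard = savgol_filter(standard, window_length=21, polyorder=3, deriv=2)
sd_fraction = savgol_filter(fraction, window_length=21, polyorder=3, deriv=2)

r_raw = np.corrcoef(standard, fraction)[0, 1]
r_sd = np.corrcoef(sd_standard, sd_fraction)[0, 1]
print(f"correlation with standard: raw FT-IR r = {r_raw:.3f}, SD-IR r = {r_sd:.3f}")
```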
Processing method for forming dislocation-free SOI and other materials for semiconductor use
Holland, Orin Wayne; Thomas, Darrell Keith; Zhou, Dashun
1997-01-01
A method for preparing a silicon-on-insulator material having a relatively defect-free Si overlayer involves the implanting of oxygen ions within a silicon body and the interruption of the oxygen-implanting step to implant Si ions within the silicon body. The implanting of the oxygen ions develops an oxide layer beneath the surface of the silicon body, and the Si ions introduced by the Si ion-implanting step relieve strain which develops in the Si overlayer during the implanting step, without the need for any intervening annealing step. By relieving the strain in this manner, the likelihood of the formation of strain-induced defects in the Si overlayer is reduced. In addition, the method can be carried out at lower processing temperatures than have heretofore been used with SIMOX processes of the prior art. The principles of the invention can also be used to relieve negative strain which has been induced in a silicon body of relatively ordered lattice structure.
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
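A small numerical sketch of the Markov-chain bookkeeping described above follows. The component list, release probabilities, and transport-matrix entries are invented for illustration and are not the mission analysis values; only the update rule (expected counts released per step and redistributed by a column-stochastic transport matrix) reflects the formalism.

```python
# Hedged sketch of the expected-VEM bookkeeping: at each sampling step a
# fraction of the VEMs on each component is released and redistributed
# according to a transport matrix.  All numbers are illustrative, not the
# mission analysis values.
import numpy as np

components = ["drill bit", "sample tube", "rover arm", "environment"]
n = np.array([5.0, 0.0, 20.0, 100.0])           # expected VEMs per component at t = 0

release = np.array([0.05, 0.01, 0.02, 0.001])   # fraction released per time step

# T[i, j] = probability that a VEM released from component j lands on component i
# (columns sum to 1).
T = np.array([
    [0.10, 0.05, 0.10, 0.001],
    [0.30, 0.50, 0.05, 0.000],
    [0.20, 0.10, 0.40, 0.004],
    [0.40, 0.35, 0.45, 0.995],
])

for step in range(5):                # five discrete SAH time steps
    released = release * n           # expected VEMs leaving each component
    n = n - released + T @ released  # Markov-chain update of the expectations
print(f"expected VEMs in the sample tube after 5 steps: {n[1]:.3f}")
```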
Multi-step process for concentrating magnetic particles in waste sludges
Watson, John L.
1990-01-01
This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed.
Numerical modeling of the fracture process in a three-unit all-ceramic fixed partial denture.
Kou, Wen; Kou, Shaoquan; Liu, Hongyuan; Sjögren, Göran
2007-08-01
The main objectives were to examine the fracture mechanism and fracture process of a ceramic fixed partial denture (FPD) framework under simulated mechanical loading using a recently developed numerical modeling code, the R-T(2D) code, and to evaluate the suitability of the R-T(2D) code as a tool for this purpose. Using the code, the fracture mechanism and process of a three-unit (3U) yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) ceramic FPD framework were simulated under static loading. In addition, the fracture pattern obtained from the numerical simulation was compared with the fracture pattern obtained in a previous laboratory test. The results revealed that the simulated framework fracture pattern agreed with that observed in the laboratory test. The quasi-photoelastic stress fringe pattern and acoustic emission showed that the fracture mechanism was tensile failure and that the crack started at the lower boundary of the framework. The fracture process could be followed step by step. Based on the findings of the current study, the R-T(2D) code seems suitable as a complement to other tests and clinical observations in studying stress distribution, fracture mechanisms and fracture processes in ceramic FPD frameworks.
NASA Astrophysics Data System (ADS)
Kunimura, Shinsuke; Ohmori, Hitoshi
We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result for a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal bonded abrasive wheel, then a metal-resin bonded abrasive wheel, followed by a conductive rubber bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively, of this process. Flatness over the whole surface was improved by performing the first and second steps. After the third step, the peak-to-valley (PV) and root-mean-square (rms) values in an area of 0.72 x 0.54 mm2 on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro-roughness were efficiently reduced by ELID grinding with the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and the residual small irregularities were reduced by a short period of MRF. This process makes it possible to produce flat and smooth surfaces in several hours.
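For reference, the two surface metrics quoted above can be computed from a measured height map as follows; the height data here are a synthetic stand-in, not the measured carbon-mirror profile.

import numpy as np

# Synthetic height map (metres) standing in for a measured 0.72 x 0.54 mm2 area.
rng = np.random.default_rng(0)
heights = 1e-9 * rng.standard_normal((540, 720))  # roughness on a ~1 nm scale

pv = heights.max() - heights.min()                        # peak-to-valley
rms = np.sqrt(np.mean((heights - heights.mean()) ** 2))   # root-mean-square roughness

print(f"PV  = {pv * 1e9:.1f} nm")
print(f"rms = {rms * 1e9:.2f} nm")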
Noise Enhances Action Potential Generation in Mouse Sensory Neurons via Stochastic Resonance.
Onorato, Irene; D'Alessandro, Giuseppina; Di Castro, Maria Amalia; Renzi, Massimiliano; Dobrowolny, Gabriella; Musarò, Antonio; Salvetti, Marco; Limatola, Cristina; Crisanti, Andrea; Grassi, Francesca
2016-01-01
Noise can enhance perception of tactile and proprioceptive stimuli by stochastic resonance processes. However, the mechanisms underlying this general phenomenon remain to be characterized. Here we studied how externally applied noise influences action potential firing in mouse primary sensory neurons of dorsal root ganglia, modelling a basic process in sensory perception. Since noisy mechanical stimuli may cause stochastic fluctuations in the receptor potential, we examined the effects of sub-threshold depolarizing current steps with superimposed random fluctuations. We performed whole-cell patch clamp recordings in cultured neurons of mouse dorsal root ganglia. Noise was added either before and during the step, or during the depolarizing step only, to focus on the specific effects of external noise on action potential generation. In both cases, step + noise stimuli triggered significantly more action potentials than steps alone. The normalized power norm had a clear peak at intermediate noise levels, demonstrating that the phenomenon is driven by stochastic resonance. Spikes evoked in step + noise trials occurred earlier and showed faster rise times compared with the occasional ones elicited by steps alone. These data suggest that external noise enhances, via stochastic resonance, the recruitment of transient voltage-gated Na channels, which are responsible for action potential firing in response to rapid step-wise depolarizing currents.
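A toy leaky integrate-and-fire model (a sketch for intuition only, not the patch-clamp protocol used in the study) illustrates the qualitative effect: a sub-threshold current step alone never fires, whereas the same step with added noise can trigger spikes; all parameter values below are assumptions.

import numpy as np

def lif_spike_count(i_step, noise_sd, t_ms=500.0, dt=0.1, seed=1):
    """Spike count of a leaky integrate-and-fire neuron driven by a
    sub-threshold current step plus Gaussian noise (illustrative units)."""
    rng = np.random.default_rng(seed)
    tau, v_rest, v_thresh, v_reset = 10.0, -70.0, -55.0, -70.0
    v, spikes = v_rest, 0
    for _ in range(int(t_ms / dt)):
        dv = (-(v - v_rest) + i_step) * dt / tau
        v += dv + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

# A 14 mV-equivalent step settles ~1 mV below threshold: no spikes without noise.
for noise in [0.0, 0.5, 1.0, 2.0]:
    print(f"noise sd {noise:.1f} -> {lif_spike_count(14.0, noise)} spikes")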
NASA Astrophysics Data System (ADS)
Prete, Antonio Del; Franchi, Rodolfo; Antermite, Fabrizio; Donatiello, Iolanda
2018-05-01
Residual stresses appear in a component as a consequence of thermo-mechanical processes (e.g., ring rolling), casting and heat treatments. When such components are machined, distortions arise because the residual stresses left by the foregoing process history redistribute within the material. If the distortions are excessive, they can lead to a large number of scrap parts. Since dimensional accuracy directly affects engine efficiency, dimensional control of aerospace components is a non-trivial issue. In this paper, the problem of distortions in large, thin-walled aeroengine components made of nickel superalloys has been addressed. In order to estimate the distortions of the inner diameters after internal turning operations, a 3D Finite Element Method (FEM) analysis has been developed for a real industrial test case. The whole process history has been taken into account by developing FEM models of the ring rolling process and of the heat treatments. Three different ring rolling strategies have been studied, and the combination of related parameters yielding the best dimensional accuracy has been identified. Furthermore, grain size evolution and recrystallization phenomena during the manufacturing process have been numerically investigated using a semi-empirical Johnson-Mehl-Avrami-Kolmogorov (JMAK) model. The volume subtractions have been simulated by Boolean trimming: both a one-step and a multi-step analysis have been performed. The multi-step procedure made it possible to choose the material removal sequence that minimizes machining distortions.
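For reference, JMAK kinetics of the kind used in such recrystallization modelling have the closed form X(t) = 1 - exp(-k t^n). The short sketch below evaluates it with hypothetical kinetic constants, not the calibrated values from this study.

import math

def jmak_fraction(t, k, n):
    """Recrystallized volume fraction X(t) = 1 - exp(-k * t**n) (JMAK kinetics)."""
    return 1.0 - math.exp(-k * t ** n)

# Hypothetical kinetic constants, for illustration only.
k, n = 0.02, 2.0
for t in [1, 5, 10, 20]:   # time in arbitrary units
    print(f"t = {t:3d}  ->  X = {jmak_fraction(t, k, n):.3f}")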
Radial-rotation profile forming: A new processing technology of incremental sheet metal forming
NASA Astrophysics Data System (ADS)
Laue, Robert; Härtel, Sebastian; Awiszus, Birgit
2018-05-01
Incremental forming processes (i.e., spinning) of sheet metal blanks into cylindrical cups are suitable for small lot sizes. The produced cups are frequently used as preforms for workpieces with additional functions, such as profiled hollow parts, produced in further forming steps [1]. The incremental forming process radial-rotation profile forming has been developed to enable the production of profiled hollow parts with low sheet thinning and good geometrical accuracy. The two principal forming steps are the production of the preform by rotational swing-folding [2] and the subsequent radial profiling of the hollow part, both in one clamping position. The rotational swing-folding process is based on a combination of conventional spinning and swing-folding: a round blank rotates on a profiled mandrel and, due to the swinging motion of a cylindrical forming tool, is formed into a cup with low sheet thinning. In addition, thickening occurs at the edge of the blank and wrinkles form. However, the wrinkles are folded into the indentations of the profiled mandrel and can advantageously be reshaped in the second process step, the radial profiling. Due to the rotation and the continuous radial feed of a profiled forming tool toward the profiled mandrel, the axial profile is formed in this second step. Because of the minor relative movement between tool and blank in the axial direction, sheet thinning remains low, which is an advantage of this process principle.
Carbothermal Reduction of Quartz with Carbon from Natural Gas
NASA Astrophysics Data System (ADS)
Li, Fei; Tangstad, Merete
2017-04-01
Carbothermal reactions between quartz and two different carbons originating from natural gas were investigated in this paper. One of the two carbons is commercial carbon black produced from natural gas in a medium-thermal production process; the other is carbon obtained from natural gas cracking at 1273 K (1000 °C) and deposited directly on the quartz pellet. At 1923 K (1650 °C) in a CO atmosphere, the impact of carbon content, pellet structure, gas transfer and heating rate was investigated in a thermo-gravimetric furnace. The reaction process can be divided into two steps: an initial SiC-producing step followed by a SiO-producing step. Higher carbon content and increased gas transfer improve the reaction rate of the SiC-producing step, while the thicker carbon coating in the carbon-deposited pellet hinders the reaction rate. Better gas transfer through the sample holder improves the reaction rate but causes more SiO loss. Heating rate has almost no influence on the reaction. Mass balance analysis shows that the mole ratios between SiO2, free carbon and SiC in the SiC-producing and SiO-producing steps, in both CO and Ar, fit the reaction SiO2(s) + 3 C(s) = SiC(s) + 2 CO(g). SiC-particle and SiC-coating formation processes in the mixed and carbon-deposited pellets are proposed. SiC whiskers formed in the voids of both types of pellets.
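The mole ratios reported for the SiC-producing step follow directly from the stoichiometry SiO2(s) + 3 C(s) = SiC(s) + 2 CO(g); a quick mass-balance check of the theoretical mass loss (the CO leaving the pellet) looks like the sketch below, which is an illustration rather than the authors' calculation.

# Molar masses in g/mol.
M_SiO2, M_C, M_SiC, M_CO = 60.08, 12.011, 40.10, 28.01

# Per mole of SiO2 reacted: SiO2 + 3 C -> SiC + 2 CO.
mass_in = M_SiO2 + 3 * M_C   # condensed reactants
mass_solid_out = M_SiC       # condensed product remaining in the pellet
mass_gas_out = 2 * M_CO      # CO leaving as gas

loss_fraction = mass_gas_out / mass_in
print(f"Reactant mass per mol SiO2: {mass_in:.2f} g")
print(f"Theoretical mass loss (CO): {mass_gas_out:.2f} g "
      f"({100 * loss_fraction:.1f}% of the condensed charge)")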
Stepped fans and facies-equivalent phyllosilicates in Coprates Catena, Mars
NASA Astrophysics Data System (ADS)
Grindrod, P. M.; Warner, N. H.; Hobley, D. E. J.; Schwartz, C.; Gupta, S.
2018-06-01
Stepped fan deposits and phyllosilicate mineralogies are relatively common features on Mars but have not previously been found in association with each other. Both of these features are widely accepted to be the result of aqueous processes, but the assumed role and nature of any water varies. In this study we have investigated two stepped fan deposits in Coprates Catena, Mars, which have a genetic link to light-toned material that is rich in Fe-Mg phyllosilicate phases. Although of different sizes and in separate, but adjacent, trough-like depressions, we identify similar features at these stepped fans and phyllosilicates that are indicative of similar formation conditions and processes. Our observations of the overall geomorphology, mineralogy and chronology of these features are consistent with a two-stage formation process: deposition in the troughs first occurred into shallow standing water or playas, forming fluvial or alluvial fans that terminate in delta deposits and interfinger with interpreted lacustrine facies, followed by a later period of deposition under sub-aerial conditions that formed alluvial fan deposits. We suggest that the distinctive stepped appearance of these fans is the result of aeolian erosion and is not a primary depositional feature. This combined formation framework for stepped fans and phyllosilicates can also explain other similar features on Mars, and adds to the growing evidence of fluvial activity in the equatorial region of Mars during the Hesperian and Amazonian.
On-site manufacture of propellant oxygen from lunar resources
NASA Technical Reports Server (NTRS)
Rosenberg, Sanders D.
1992-01-01
The Aerojet Carbothermal Process for the manufacture of oxygen from lunar resources has three essential steps: the reduction of silicate with methane to form carbon monoxide and hydrogen; the reduction of carbon monoxide with hydrogen to form methane and water; and the electrolysis of water to form oxygen and hydrogen. This cyclic process does not depend upon the presence of water or water precursors in the lunar materials; it will produce oxygen from silicates regardless of their precise composition and fine structure. Research on the first step of the process was initiated by determining some of the operating conditions required to reduce igneous rock with carbon and silicon carbide. The initial phase of research on the second step is completed; quantitative conversion of carbon monoxide and hydrogen to methane and water was achieved with a nickel-on-kieselguhr catalyst. The equipment used in and the results obtained from these process studies are reported in detail.
Rotor assembly and method for automatically processing liquids
Burtis, C.A.; Johnson, W.F.; Walker, W.A.
1992-12-22
A rotor assembly is described for performing a relatively large number of processing steps upon a sample, such as a whole blood sample, and a diluent, such as water. It includes a rotor body, rotatable about an axis, that contains a network of chambers within which various processing steps are performed upon the sample and diluent, together with passageways through which the sample and diluent are transferred. A transfer mechanism is movable through the rotor body under the influence of a magnetic field that is generated adjacent to the transfer mechanism and movable along the rotor body, and the assembly utilizes centrifugal force, a transfer of momentum and capillary action to perform any of a number of processing steps such as separation, aliquoting, transference, washing, reagent addition and mixing of the sample and diluent within the rotor body. The rotor body is particularly suitable for automatic immunoassay analyses. 34 figs.
Method and apparatus for fault tolerance
NASA Technical Reports Server (NTRS)
Masson, Gerald M. (Inventor); Sullivan, Gregory F. (Inventor)
1993-01-01
A method and apparatus for achieving fault tolerance in a computer system having at least a first central processing unit and a second central processing unit. The method comprises the steps of first executing a first algorithm in the first central processing unit on an input, producing a first output as well as a certification trail; next, executing a second algorithm in the second central processing unit on the input and on at least a portion of the certification trail, producing a second output, where the second algorithm has a faster execution time than the first algorithm for a given input; and then comparing the first and second outputs, producing an error result if the two outputs are not the same. The step of executing the first algorithm and the step of executing the second algorithm preferably take place over essentially the same time period.
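A minimal sketch of the certification-trail idea, using sorting as a stand-in task (this is an illustration of the general technique, not the patented system): the first algorithm does the full work and emits a trail, the second uses the trail to reproduce the result faster, and the two outputs are compared.

from typing import List, Tuple

def primary_sort(data: List[int]) -> Tuple[List[int], List[int]]:
    """First algorithm: a full O(n log n) sort that also emits a certification
    trail -- the permutation of indices producing the sorted order."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    output = [data[i] for i in trail]
    return output, trail

def secondary_sort(data: List[int], trail: List[int]) -> List[int]:
    """Second algorithm: reproduces the sorted output in linear time using the
    trail, after checking that the trail is a permutation certifying a
    non-decreasing order."""
    if len(trail) != len(data) or set(trail) != set(range(len(data))):
        raise ValueError("certification trail is not a permutation of the indices")
    output = [data[i] for i in trail]
    if any(a > b for a, b in zip(output, output[1:])):
        raise ValueError("certification trail does not certify a sorted order")
    return output

data = [7, 3, 9, 1, 4]
first_output, trail = primary_sort(data)      # would run on the first CPU
second_output = secondary_sort(data, trail)   # would run on the second CPU, faster per input
assert first_output == second_output, "output mismatch -> error result (possible fault)"
print("outputs agree:", first_output)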
Review of Manganese Processing for Production of TRIP/TWIP Steels, Part 2: Reduction Studies
NASA Astrophysics Data System (ADS)
Elliott, R.; Coley, K.; Mostaghel, S.; Barati, M.
2018-02-01
Production of ultrahigh-manganese steels is expected to result in a significant increase in demand for low-carbon (LC) ferromanganese (FeMn) and silicomanganese (SiMn). Current manganese processing techniques are energy intensive and typically yield a high-carbon product. The present work therefore reviews the available literature on carbothermic reduction of Mn oxides and ores, with the objective of identifying opportunities for future process development to mitigate the cost of LC FeMn and SiMn. In general, there is consensus that carbothermic reduction of Mn oxides and ores is limited by gasification of carbon. Conditions which enhance or bypass this step (e.g., by application of CH4) yield higher rates of reduction at lower temperatures. This phenomenon has potential application in the solid-state reduction of Mn ore. Other avenues for process development include optimization of the prereduction step in conventional FeMn production and metallothermic reduction as a secondary reduction step.
Zang, Yuguo; Kammerer, Bernd; Eisenkolb, Maike; Lohr, Katrin; Kiefer, Hans
2011-01-01
Crystallization conditions of an intact monoclonal IgG4 (immunoglobulin G, subclass 4) antibody were established in vapor diffusion mode by sparse matrix screening and subsequent optimization. The procedure was transferred to microbatch conditions and a phase diagram was built showing surprisingly low solubility of the antibody at equilibrium. With up-scaling to process scale in mind, purification efficiency of the crystallization step was investigated. Added model protein contaminants were excluded from the crystals to more than 95%. No measurable loss of Fc-binding activity was observed in the crystallized and redissolved antibody. Conditions could be adapted to crystallize the antibody directly from concentrated and diafiltrated cell culture supernatant, showing purification efficiency similar to that of Protein A chromatography. We conclude that crystallization has the potential to be included in downstream processing as a low-cost purification or formulation step.
Process for conversion of lignin to reformulated hydrocarbon gasoline
Shabtai, Joseph S.; Zmierczak, Wlodzimierz W.; Chornet, Esteban
1999-09-28
A process for converting lignin into high-quality reformulated hydrocarbon gasoline compositions in high yields is disclosed. The process is a two-stage, catalytic reaction process that produces a reformulated hydrocarbon gasoline product with a controlled amount of aromatics. In the first stage, a lignin material is subjected to a base-catalyzed depolymerization reaction in the presence of a supercritical alcohol as a reaction medium, to thereby produce a depolymerized lignin product. In the second stage, the depolymerized lignin product is subjected to a sequential two-step hydroprocessing reaction to produce a reformulated hydrocarbon gasoline product. In the first hydroprocessing step, the depolymerized lignin is contacted with a hydrodeoxygenation catalyst to produce a hydrodeoxygenated intermediate product. In the second hydroprocessing step, the hydrodeoxygenated intermediate product is contacted with a hydrocracking/ring hydrogenation catalyst to produce the reformulated hydrocarbon gasoline product which includes various desirable naphthenic and paraffinic compounds.
Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics.
Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone
2016-10-05
Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics.
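A hedged sketch of the arithmetic underlying such stepwise, non-isothermal timescale estimates (with entirely hypothetical diffusion parameters, not the calibrated values of the model above): each isothermal step contributes a residence time t_i = x_i^2 / D(T_i), where D follows an Arrhenius law.

import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_D(T_kelvin, D0=1e-6, Ea=250e3):
    """Hypothetical Arrhenius diffusion coefficient D = D0 * exp(-Ea / (R*T)), in m^2/s."""
    return D0 * math.exp(-Ea / (R * T_kelvin))

# Hypothetical isothermal steps: (temperature in K, diffusion length in metres).
steps = [(1423.0, 20e-6), (1373.0, 10e-6), (1323.0, 5e-6)]

total_days = 0.0
for T, x in steps:
    t_seconds = x ** 2 / arrhenius_D(T)   # characteristic time from x ~ sqrt(D t)
    total_days += t_seconds / 86400.0
    print(f"T = {T:6.1f} K, x = {x * 1e6:4.1f} um -> {t_seconds / 86400.0:6.1f} days")
print(f"Total (stepwise, non-isothermal) timescale: {total_days:.1f} days")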
Syntactic and semantic restrictions on morphological recomposition: MEG evidence from Greek.
Neophytou, K; Manouilidou, C; Stockall, L; Marantz, A
2018-05-16
Complex morphological processing has been extensively studied in the past decades. However, most of this work has either focused on only certain steps of this process or has been conducted in only a few languages, such as English. The purpose of the present study is to investigate the spatiotemporal cortical processing profile of the distinct steps previously reported in the literature, from decomposition to re-composition of morphologically complex items, in a relatively understudied language, Greek. Using magnetoencephalography, we confirm the role of the fusiform gyrus in early, form-based morphological decomposition, we relate the syntactic licensing of stem-suffix combinations to the ventral visual processing stream, somewhat independent from lexical access for the stem, and we further elucidate the role of orbitofrontal regions in semantic composition. Thus, the current study offers the most comprehensive test to date of visual morphological processing and additional, crosslinguistic validation of the steps involved in it.
MIRADS-2 Implementation Manual
NASA Technical Reports Server (NTRS)
1975-01-01
The Marshall Information Retrieval and Display System (MIRADS), a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base, including capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS Library must be established as a cataloged mass storage file as the first step in MIRADS implementation, and the procedure for establishing it is given. The system is currently operational on the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.
Gea, Saharman; Reynolds, Christopher T; Roohpour, Nima; Wirjosentono, Basuki; Soykeabkaew, Nattakan; Bilotti, Emiliano; Peijs, Ton
2011-10-01
Bacterial cellulose (BC) is a natural hydrogel produced by Acetobacter xylinum (recently renamed Gluconacetobacter xylinum) in culture, consisting of a three-dimensional network of ribbon-shaped bundles of cellulose microfibrils. Here, a two-step purification process is presented that significantly improves the structural, mechanical, thermal and morphological behaviour of BC sheet processed from these hydrogels produced in static culture. Alkalisation of BC using a single-step treatment with 2.5 wt.% NaOH solution produced a twofold increase in the Young's modulus of processed BC sheet over untreated BC sheet. Further enhancements are achieved after a second treatment with 2.5 wt.% NaOCl (bleaching). These treatments were carefully designed to prevent any polymorphic crystal transformation from cellulose I to cellulose II, which can be detrimental to the mechanical properties. Scanning electron microscopy and thermogravimetric analysis reveal that, with increasing chemical treatment, the morphological and thermal stability of the processed films is also improved.
MOCAT: a metagenomics assembly and gene prediction toolkit.
Kultima, Jens Roat; Sunagawa, Shinichi; Li, Junhua; Chen, Weineng; Chen, Hua; Mende, Daniel R; Arumugam, Manimozhiyan; Pan, Qi; Liu, Binghang; Qin, Junjie; Wang, Jun; Bork, Peer
2012-01-01
MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/.
A mechanism for leader stepping
NASA Astrophysics Data System (ADS)
Ebert, U.; Carlson, B. E.; Koehn, C.
2013-12-01
The stepping of negative leaders is well observed but not well understood. A major difficulty is that the streamer corona, which determines the evolution of a leader, is typically invisible within a thunderstorm. Motivated by recent observations of streamer and leader formation in the laboratory by T.M.P. Briels, S. Nijdam, P. Kochkin, A.P.J. van Deursen et al., by recent simulations of these processes by J. Teunissen, A. Sun et al., and by our theoretical understanding of the process, we suggest how laboratory phenomena can be extrapolated to lightning leaders to explain the stepping mechanism.
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green-body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the optimization of the drying step, large-size spinel samples were obtained.
Simulation of Unique Pressure Changing Steps and Situations in Psa Processes
NASA Technical Reports Server (NTRS)
Ebner, Armin D.; Mehrotra, Amal; Knox, James C.; LeVan, Douglas; Ritter, James A.
2007-01-01
A more rigorous cyclic adsorption process simulator is being developed for use in the development and understanding of new and existing PSA processes. Unique features of this new version of the simulator, which Ritter and co-workers have been developing for the past decade or so, include: multiple adsorbent layers in each bed; pressure drop in the column; valves for entering and exiting flows and for predicting real-time pressurization and depressurization rates; the ability to account for choked flow conditions; the ability to pressurize and depressurize simultaneously from both ends of the columns; the ability to equalize between multiple pairs of columns; the ability to equalize simultaneously from both ends of pairs of columns; and the ability to handle the very large pressure ratios, and hence velocities, associated with deep vacuum systems. These changes to the simulator provide unique opportunities to study the effects of novel pressure changing steps and extreme process conditions on the performance of virtually any commercial or developmental PSA process. This presentation will provide an overview of the cyclic adsorption process simulator equations and algorithms used in the new adaptation. It will focus primarily on the novel pressure changing steps and their effects on the performance of a PSA system that epitomizes the extremes of PSA process design and operation. This PSA process is a sorbent-based atmosphere revitalization (SBAR) system that NASA is developing for new manned exploration vehicles. The SBAR system consists of a 2-bed, 3-step, 3-layer system that operates between atmospheric pressure and the vacuum of space, evacuates from both ends of the column simultaneously, experiences choked flow conditions during pressure changing steps, and experiences a continuously changing feed composition as it removes metabolic CO2 and H2O from a closed and fixed volume, i.e., the spacecraft cabin. Important process performance indicators of this SBAR system are size, the corresponding CO2 and H2O removal efficiencies, and the N2 and O2 loss rates. Results of the fundamental behavior of this PSA process during extreme operating conditions will be presented and discussed.
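One of the simulator features mentioned above, detecting choked flow across a valve during pressure-changing steps, comes down to standard compressible-flow relations. The sketch below is a generic textbook orifice-flow calculation with assumed valve parameters, not the simulator's actual implementation.

import math

def valve_mass_flow(p_up, p_down, T_up, area, Cd=0.8, gamma=1.4, R=287.0):
    """Isentropic orifice flow of an ideal gas through a valve (kg/s).
    Switches to the choked-flow expression when the downstream/upstream
    pressure ratio falls below the critical ratio (2/(gamma+1))**(gamma/(gamma-1))."""
    r_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    r = max(p_down / p_up, 1e-12)
    if r <= r_crit:  # choked: mass flow independent of downstream pressure
        flux = math.sqrt(gamma / (R * T_up)) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    else:            # subsonic
        flux = math.sqrt(2.0 * gamma / ((gamma - 1.0) * R * T_up) * (r ** (2.0 / gamma) - r ** ((gamma + 1.0) / gamma)))
    return Cd * area * p_up * flux

# Assumed conditions: venting a 100 kPa column to near-vacuum through a 5 mm orifice.
print(f"{valve_mass_flow(1.0e5, 10.0, 295.0, math.pi * 0.0025**2):.4f} kg/s (choked)")
print(f"{valve_mass_flow(1.0e5, 9.0e4, 295.0, math.pi * 0.0025**2):.4f} kg/s (subsonic)")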