Automated procedures for sizing aerospace vehicle structures /SAVES/
NASA Technical Reports Server (NTRS)
Giles, G. L.; Blackburn, C. L.; Dixon, S. C.
1972-01-01
Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development called SAVES is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.
Automated protein NMR structure determination using wavelet de-noised NOESY spectra.
Dancea, Felician; Günther, Ulrich
2005-11-01
A major time-consuming step of protein NMR structure determination is the generation of reliable NOESY cross peak lists which usually requires a significant amount of manual interaction. Here we present a new algorithm for automated peak picking involving wavelet de-noised NOESY spectra in a process where the identification of peaks is coupled to automated structure determination. The core of this method is the generation of incremental peak lists by applying different wavelet de-noising procedures which yield peak lists of a different noise content. In combination with additional filters which probe the consistency of the peak lists, good convergence of the NOESY-based automated structure determination could be achieved. These algorithms were implemented in the context of the ARIA software for automated NOE assignment and structure determination and were validated for a polysulfide-sulfur transferase protein of known structure. The procedures presented here should be commonly applicable for efficient protein NMR structure determination and automated NMR peak picking.
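The wavelet de-noising step described above can be illustrated with a single-level Haar transform plus soft thresholding of the detail coefficients. This is a minimal stand-in for the paper's procedure only; the wavelet family, decomposition level and threshold here are illustrative assumptions, not the authors' settings.

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet de-noising with soft thresholding.
    Assumes len(signal) is even. With threshold=0 the round trip
    reconstructs the input exactly."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]
    # Soft-threshold the detail coefficients (high-frequency noise lives here).
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    # Inverse Haar transform.
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out
```

Running the same spectrum through several thresholds, as the paper does, yields the incremental peak lists with differing noise content.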
The 3D Euler solutions using automated Cartesian grid generation
NASA Technical Reports Server (NTRS)
Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.
1993-01-01
Viewgraphs on 3-dimensional Euler solutions using automated Cartesian grid generation are presented. Topics covered include: computational fluid dynamics (CFD) and the design cycle; Cartesian grid strategy; structured body fit; grid generation; prolate spheroid; and ONERA M6 wing.
Automated generation of weld path trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sizemore, John M.; Hinman-Sweeney, Elaine Marie; Ames, Arlo Leroy
2003-06-01
AUTOmated GENeration of Control Programs for Robotic Welding of Ship Structure (AUTOGEN) is software that automates the planning and compiling of control programs for robotic welding of ship structure. The software works by evaluating computer representations of the ship design and the manufacturing plan. Based on this evaluation, AUTOGEN internally identifies and appropriately characterizes each weld. Then it constructs the robot motions necessary to accomplish the welds and determines for each the correct assignment of process control values. AUTOGEN generates these robot control programs completely without manual intervention or edits except to correct wrong or missing input data. Most ship structure assemblies are unique or at best manufactured only a few times. Accordingly, the high cost inherent in all previous methods of preparing complex control programs has made robot welding of ship structures economically unattractive to the U.S. shipbuilding industry. AUTOGEN eliminates the cost of creating robot control programs. With programming costs eliminated, capitalization of robots to weld ship structures becomes economically viable. Robot welding of ship structures will result in reduced ship costs, uniform product quality, and enhanced worker safety. Sandia National Laboratories and Northrop Grumman Ship Systems worked with the National Shipbuilding Research Program to develop a means of automated path and process generation for robotic welding. This effort resulted in the AUTOGEN program, which has successfully demonstrated automated path generation and robot control. Although the current implementation of AUTOGEN is optimized for welding applications, the path and process planning capability has applicability to a number of industrial applications, including painting, riveting, and adhesive delivery.
Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design
NASA Technical Reports Server (NTRS)
Li, Wu; Robinson, Jay
2016-01-01
This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.
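The core of any such automated meshing step is turning a few layout parameters into node coordinates and element connectivity. The sketch below generates a structured quad (shell) mesh for a rectangular panel; it is a minimal illustration of the idea, not the paper's ModelCenter/OpenVSP/NASTRAN pipeline.

```python
def structured_quad_mesh(nx, ny, width, height):
    """Nodes and quad connectivity for an nx-by-ny element rectangular panel.
    Nodes are numbered row by row; each quad lists its corners
    counter-clockwise."""
    nodes = [(i * width / nx, j * height / ny)
             for j in range(ny + 1) for i in range(nx + 1)]
    quads = []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i           # bottom-left node of this quad
            quads.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))
    return nodes, quads
```

Spar/rib locations in a real wing mesh would control where such panels start and stop; here the only parameters are the element counts and panel size.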
Automatic structured grid generation using Gridgen (some restrictions apply)
NASA Technical Reports Server (NTRS)
Chawner, John R.; Steinbrenner, John P.
1995-01-01
The authors have noticed in the recent grid generation literature an emphasis on the automation of structured grid generation. The motivation behind such work is clear; grid generation is easily the most despised task in the grid-analyze-visualize triad of computational analysis (CA). However, because grid generation is closely coupled to both the design and analysis software and because quantitative measures of grid quality are lacking, 'push button' grid generation usually results in a compromise between speed, control, and quality. Overt emphasis on automation obscures the substantive issues of providing users with flexible tools for generating and modifying high quality grids in a design environment. In support of this paper's tongue-in-cheek title, many features of the Gridgen software are described. Gridgen is by no stretch of the imagination an automatic grid generator. Despite this fact, the code does utilize many automation techniques that permit interesting regenerative features.
A continuously growing web-based interface structure databank
NASA Astrophysics Data System (ADS)
Erwin, N. A.; Wang, E. I.; Osysko, A.; Warner, D. H.
2012-07-01
The macroscopic properties of materials can be significantly influenced by the presence of microscopic interfaces. The complexity of these interfaces coupled with the vast configurational space in which they reside has been a long-standing obstacle to the advancement of true bottom-up material behavior predictions. In this vein, atomistic simulations have proven to be a valuable tool for investigating interface behavior. However, before atomistic simulations can be utilized to model interface behavior, meaningful interface atomic structures must be generated. The generation of structures has historically been carried out disjointly by individual research groups, and thus, has constituted an overlap in effort across the broad research community. To address this overlap and to lower the barrier for new researchers to explore interface modeling, we introduce a web-based interface structure databank (www.isdb.cee.cornell.edu) where users can search, download and share interface structures. The databank is intended to grow via two mechanisms: (1) interface structure donations from individual research groups and (2) an automated structure generation algorithm which continuously creates equilibrium interface structures. In this paper, we describe the databank, the automated interface generation algorithm, and compare a subset of the autonomously generated structures to structures currently available in the literature. To date, the automated generation algorithm has been directed toward aluminum grain boundary structures, which can be compared with experimentally measured population densities of aluminum polycrystals.
Automation of NMR structure determination of proteins.
Altieri, Amanda S; Byrd, R Andrew
2004-10-01
The automation of protein structure determination using NMR is coming of age. The tedious processes of resonance assignment, followed by assignment of NOE (nuclear Overhauser enhancement) interactions (now intertwined with structure calculation), assembly of input files for structure calculation, intermediate analyses of incorrect assignments and bad input data, and finally structure validation are all being automated with sophisticated software tools. The robustness of the different approaches continues to deal with problems of completeness and uniqueness; nevertheless, the future is very bright for automation of NMR structure generation to approach the levels found in X-ray crystallography. Currently, near completely automated structure determination is possible for small proteins, and the prospect for medium-sized and large proteins is good. Copyright 2004 Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven
2014-08-01
Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, which was generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculation, using our automated method, shows a mean±standard error of 1.9°±2.2° and 4.4°±2.6° respectively with field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.
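The dip and dip-direction step reduces to plane geometry: once a fault's 3D points are known, the plane normal yields both angles. A hedged sketch follows, using three (east, north, up) points to define the plane; the paper's SfM-based workflow fits surfaces to many points, so the three-point version here is purely illustrative.

```python
import math

def dip_and_dip_direction(p1, p2, p3):
    """Dip (degrees from horizontal) and dip direction (degrees clockwise
    from north) of the plane through three (east, north, up) points."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    # Plane normal via the cross product.
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    if n[2] < 0:                      # orient the normal upward
        n = [-c for c in n]
    mag = math.sqrt(sum(c * c for c in n))
    dip = math.degrees(math.acos(n[2] / mag))
    # The horizontal component of the upward normal points down-dip.
    dip_dir = math.degrees(math.atan2(n[0], n[1])) % 360.0
    return dip, dip_dir
```

For example, a plane striking north and dropping one metre eastward per metre gives a 45 degree dip toward azimuth 090.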
Lagorce, David; Pencheva, Tania; Villoutreix, Bruno O; Miteva, Maria A
2009-11-13
Discovery of new bioactive molecules that could enter drug discovery programs or that could serve as chemical probes is a very complex and costly endeavor. Structure-based and ligand-based in silico screening approaches are nowadays extensively used to complement experimental screening approaches, in order to increase the effectiveness of the process and to facilitate the screening of thousands or millions of small molecules against a biomolecular target. Both in silico screening methods require as input a suitable chemical compound collection, and most often the 3D structures of the small molecules have to be generated since compounds are usually delivered in 1D SMILES, CANSMILES or in 2D SDF formats. Here, we describe the new open source program DG-AMMOS, which allows the generation of the 3D conformation of small molecules using Distance Geometry and their energy minimization via Automated Molecular Mechanics Optimization. The program is validated on the Astex dataset, the ChemBridge Diversity database and on a number of small molecules with known crystal structures extracted from the Cambridge Structural Database. A comparison with the free program Balloon and the well-known commercial program Omega, which also generate 3D structures of small molecules, is carried out. The results show that the new free program DG-AMMOS is a very efficient 3D structure generator engine. DG-AMMOS provides fast, automated and reliable access to the generation of 3D conformations of small molecules and facilitates the preparation of a compound collection prior to high-throughput virtual screening computations. The validation of DG-AMMOS on several different datasets proves that generated structures are generally of equal quality or sometimes better than structures obtained by other tested methods.
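Distance geometry, as used by DG-AMMOS, turns a matrix of target interatomic distances into 3D coordinates. The toy relaxation below conveys only the idea: start from random coordinates and iteratively nudge each pair toward its target distance. The real program's metrization and the AMMOS force-field minimization are far more sophisticated than this sketch.

```python
import math
import random

def embed_distances(dmat, dim=3, iters=500, seed=7):
    """Crude distance-geometry embedding: random start, then damped
    pairwise corrections pulling each pair toward its target distance."""
    rng = random.Random(seed)
    n = len(dmat)
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(i + 1, n):
                diff = [pos[i][k] - pos[j][k] for k in range(dim)]
                d = math.sqrt(sum(c * c for c in diff)) or 1e-9
                step = 0.25 * (dmat[i][j] - d) / d   # damped correction
                for k in range(dim):
                    pos[i][k] += step * diff[k]
                    pos[j][k] -= step * diff[k]
    return pos
```

Embedding four points with all pairwise distances equal to 1 recovers a regular tetrahedron, a configuration that is impossible in 2D, which is why the relaxation settles into three dimensions.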
NASA Astrophysics Data System (ADS)
Theveneau, P.; Baker, R.; Barrett, R.; Beteva, A.; Bowler, M. W.; Carpentier, P.; Caserotto, H.; de Sanctis, D.; Dobias, F.; Flot, D.; Guijarro, M.; Giraud, T.; Lentini, M.; Leonard, G. A.; Mattenet, M.; McCarthy, A. A.; McSweeney, S. M.; Morawe, C.; Nanao, M.; Nurizzo, D.; Ohlsson, S.; Pernot, P.; Popov, A. N.; Round, A.; Royant, A.; Schmid, W.; Snigirev, A.; Surr, J.; Mueller-Dieckmann, C.
2013-03-01
Automation and advances in technology are the key elements in addressing the steadily increasing complexity of Macromolecular Crystallography (MX) experiments. Much of this complexity is due to the inter- and intra-crystal heterogeneity in diffraction quality often observed for crystals of multi-component macromolecular assemblies or membrane proteins. Such heterogeneity makes high-throughput sample evaluation an important and necessary tool for increasing the chances of a successful structure determination. The introduction at the ESRF of automatic sample changers in 2005 dramatically increased the number of samples that were tested for diffraction quality. This "first generation" of automation, coupled with advances in software aimed at optimising data collection strategies in MX, resulted in a three-fold increase in the number of crystal structures elucidated per year using data collected at the ESRF. In addition, sample evaluation can be further complemented using small angle scattering experiments on the newly constructed bioSAXS facility on BM29 and the micro-spectroscopy facility (ID29S). The construction of a second generation of automated facilities on the MASSIF (Massively Automated Sample Screening Integrated Facility) beam lines will build on these advances and should provide a paradigm shift in how MX experiments are carried out which will benefit the entire Structural Biology community.
Automated building of organometallic complexes from 3D fragments.
Foscato, Marco; Venkatraman, Vishwesh; Occhipinti, Giovanni; Alsberg, Bjørn K; Jensen, Vidar R
2014-07-28
A method for the automated construction of three-dimensional (3D) molecular models of organometallic species in design studies is described. Molecular structure fragments derived from crystallographic structures and accurate molecular-level calculations are used as 3D building blocks in the construction of multiple molecular models of analogous compounds. The method allows for precise control of stereochemistry and geometrical features that may otherwise be very challenging, or even impossible, to achieve with commonly available generators of 3D chemical structures. The new method was tested in the construction of three sets of active or metastable organometallic species of catalytic reactions in the homogeneous phase. The performance of the method was compared with those of commonly available methods for automated generation of 3D models, demonstrating higher accuracy of the prepared 3D models in general, and, in particular, a much wider range with respect to the kind of chemical structures that can be built automatically, with capabilities far beyond standard organic and main-group chemistry.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as an internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.
Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M
2011-10-01
Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
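Discrete Gaussian curvature on a triangulated surface is commonly estimated by the angle defect at each vertex: 2*pi minus the sum of the incident triangle angles. Vertices with large defects are natural candidates for building-block corners. The sketch below shows only this curvature estimate; the published algorithm's statistical and combinatorial machinery is not reproduced.

```python
import math

def angle_at(v, a, b):
    """Interior angle at vertex v in triangle (v, a, b)."""
    u = [a[i] - v[i] for i in range(3)]
    w = [b[i] - v[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(u, w))
    nu = math.sqrt(sum(x * x for x in u))
    nw = math.sqrt(sum(x * x for x in w))
    return math.acos(dot / (nu * nw))

def angle_defect(vertex, incident_triangles):
    """Discrete Gaussian curvature: 2*pi minus the summed incident angles.
    Flat regions give ~0; sharp corners give large positive defects."""
    total = sum(angle_at(vertex, a, b) for a, b in incident_triangles)
    return 2.0 * math.pi - total

# Example: the corner of a cube, where three right angles meet.
corner = (0.0, 0.0, 0.0)
tris = [((1, 0, 0), (0, 1, 0)), ((0, 1, 0), (0, 0, 1)), ((0, 0, 1), (1, 0, 0))]
```

A cube corner accumulates 3*(pi/2) of angle, leaving a defect of pi/2, which is exactly the kind of feature a block-placement scheme would latch onto.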
Automated crystallographic system for high-throughput protein structure determination.
Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F
2003-07-01
High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.
Loft: An Automated Mesh Generator for Stiffened Shell Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
2011-01-01
Loft is an automated mesh generation code that is designed for aerospace vehicle structures. From user input, Loft generates meshes for wings, noses, tanks, fuselage sections, thrust structures, and so on. As a mesh is generated, each element is assigned properties to mark the part of the vehicle with which it is associated. This property assignment is an extremely powerful feature that enables detailed analysis tasks, such as load application and structural sizing. This report is presented in two parts. The first part is an overview of the code and its applications. The modeling approach that was used to create the finite element meshes is described. Several applications of the code are demonstrated, including a Next Generation Launch Technology (NGLT) wing-sizing study, a lunar lander stage study, a launch vehicle shroud shape study, and a two-stage-to-orbit (TSTO) orbiter. Part two of the report is the program user manual. The manual includes in-depth tutorials and a complete command reference.
Shingrani, Rahul; Krenz, Gary; Molthen, Robert
2010-01-01
With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for a rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation and structural hierarchical ordering with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages including significantly improved speed and minimized operator interaction and biasing. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured using intense operator intervention. Published by Elsevier Ireland Ltd.
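Structural hierarchical ordering of a vascular tree is often done with Strahler orders, where an order increases only when two equally ordered branches meet. A minimal recursive version follows; the paper's ordering scheme may differ in detail, so treat this as a generic sketch.

```python
def strahler_order(children, node):
    """Strahler order of a tree given as {parent: [children]}.
    Leaves (terminal vessels) have order 1; a parent's order rises by one
    only when its two highest-ordered children tie."""
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler_order(children, k) for k in kids), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]
```

Repeating the measurement across animals then compares, for example, diameter versus Strahler order between disease models.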
pmx: Automated protein structure and topology generation for alchemical perturbations
Gapsys, Vytautas; Michielssens, Servaas; Seeliger, Daniel; de Groot, Bert L
2015-01-01
Computational protein design requires methods to accurately estimate free energy changes in protein stability or binding upon an amino acid mutation. From the different approaches available, molecular dynamics-based alchemical free energy calculations are unique in their accuracy and solid theoretical basis. The challenge in using these methods lies in the need to generate hybrid structures and topologies representing two physical states of a system. A custom made hybrid topology may prove useful for a particular mutation of interest, however, a high throughput mutation analysis calls for a more general approach. In this work, we present an automated procedure to generate hybrid structures and topologies for the amino acid mutations in all commonly used force fields. The described software is compatible with the Gromacs simulation package. The mutation libraries are readily supported for five force fields, namely Amber99SB, Amber99SB*-ILDN, OPLS-AA/L, Charmm22*, and Charmm36. PMID:25487359
An ultraviolet-visible spectrophotometer automation system. Part 3: Program documentation
NASA Astrophysics Data System (ADS)
Roth, G. S.; Teuschler, J. M.; Budde, W. L.
1982-07-01
The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes 'on-line' spectrophotometric quality assurance determinations, report generations, plot generations and data reduction for chlorophyll or color analysis. This system also has the capability to process manually entered data for the analysis of chlorophyll or color. For each program of the UVVIS system, this document contains a program description, flowchart, variable dictionary, code listing, and symbol cross-reference table. Also included are descriptions of file structures and of routines common to all automated analyses. The programs are written in Data General extended BASIC, Revision 4.3, under the RDOS operating systems, Revision 6.2. The BASIC code has been enhanced for real-time data acquisition, which is accomplished by CALLS to assembly language subroutines. Two other related publications are 'An Ultraviolet-Visible Spectrophotometer Automation System - Part I Functional Specifications,' and 'An Ultraviolet-Visible Spectrophotometer Automation System - Part II User's Guide.'
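The data-reduction step of an absorbance-based analysis like this boils down to a turbidity correction followed by a Beer-Lambert conversion (A = epsilon * l * c). The function below is a generic Python sketch, not the documented BASIC routines; the two-wavelength correction, the absorptivity value and the path length are assay-specific assumptions.

```python
def absorbance_report(a_measured, a_turbidity, epsilon, path_cm):
    """Turbidity-corrected absorbance and Beer-Lambert concentration.
    a_measured: absorbance at the analytical wavelength.
    a_turbidity: absorbance at a reference wavelength (scattering baseline).
    epsilon: absorptivity in L/(mol*cm) or assay-specific units.
    path_cm: cuvette path length in cm."""
    a_corr = a_measured - a_turbidity
    conc = a_corr / (epsilon * path_cm)   # c = A / (eps * l)
    return {"corrected_absorbance": a_corr, "concentration": conc}
```

The same reduction applies whether the analyte is chlorophyll or a colour standard; only the wavelengths and absorptivity change.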
Automated pulmonary lobar ventilation measurements using volume-matched thoracic CT and MRI
NASA Astrophysics Data System (ADS)
Guo, F.; Svenningsen, S.; Bluemke, E.; Rajchl, M.; Yuan, J.; Fenster, A.; Parraga, G.
2015-03-01
Objectives: To develop and evaluate an automated registration and segmentation pipeline for regional lobar pulmonary structure-function measurements, using volume-matched thoracic CT and MRI in order to guide therapy. Methods: Ten subjects underwent pulmonary function tests and volume-matched 1H and 3He MRI and thoracic CT during a single 2-hr visit. CT was registered to 1H MRI using an affine method that incorporated block-matching and this was followed by a deformable step using free-form deformation. The resultant deformation field was used to deform the associated CT lobe mask that was generated using commercial software. 3He-1H image registration used the same two-step registration method and 3He ventilation was segmented using hierarchical k-means clustering. Whole lung and lobar 3He ventilation and ventilation defect percent (VDP) were generated by mapping ventilation defects to CT-defined whole lung and lobe volumes. Target CT-3He registration accuracy was evaluated using region-, surface distance- and volume-based metrics. Automated whole lung and lobar VDP was compared with semi-automated and manual results using paired t-tests. Results: The proposed pipeline yielded regional spatial agreement of 88.0+/-0.9% and surface distance error of 3.9+/-0.5 mm. Automated and manual whole lung and lobar ventilation and VDP were not significantly different and they were significantly correlated (r = 0.77, p < 0.0001). Conclusion: The proposed automated pipeline can be used to generate regional pulmonary structural-functional maps with high accuracy and robustness, providing an important tool for image-guided pulmonary interventions.
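Once the ventilation segmentation and the CT-defined lobe masks live in the same registered space, the VDP computation itself is a masked voxel count. A toy flat-array version is sketched below; real pipelines operate on registered 3D volumes, and the 0/1 masks here are illustrative.

```python
def ventilation_defect_percent(lobe_mask, ventilation_mask):
    """VDP for one lobe: percentage of lobe voxels with no ventilation.
    Both masks are flat sequences of 0/1 over the same voxel grid."""
    lobe_voxels = sum(1 for l in lobe_mask if l)
    defect_voxels = sum(1 for l, v in zip(lobe_mask, ventilation_mask)
                        if l and not v)
    return 100.0 * defect_voxels / lobe_voxels if lobe_voxels else 0.0
```

Running this per lobe mask, rather than once for the whole lung, is what turns a global VDP into the regional lobar measurement the paper targets.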
On the virtues of automated quantitative structure-activity relationship: the new kid on the block.
de Oliveira, Marcelo T; Katekawa, Edson
2018-02-01
Quantitative structure-activity relationship (QSAR) has proved to be an invaluable tool in medicinal chemistry. Data availability at unprecedented levels through various databases has contributed to a resurgence of interest in QSAR. In this context, rapid generation of quality predictive models is highly desirable for hit identification and lead optimization. We showcase the application of an automated QSAR approach, which randomly selects multiple training/test sets and utilizes machine-learning algorithms to generate predictive models. Results demonstrate that AutoQSAR produces models of improved or similar quality to those generated by practitioners in the field, but in just a fraction of the time. Despite the potential benefit of the concept to the community, the AutoQSAR opportunity has been largely undervalued.
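The repeated random train/test splitting at the heart of such automated QSAR can be sketched with a univariate linear model standing in for the machine-learning algorithms; the real tool uses far richer descriptors and learners, so everything below is a schematic assumption.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one descriptor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def auto_qsar(xs, ys, n_splits=20, test_frac=0.3, seed=0):
    """Repeat random train/test splits; keep the model with lowest test MSE."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    best = None
    for _ in range(n_splits):
        rng.shuffle(idx)
        cut = max(1, int(len(idx) * test_frac))
        test, train = idx[:cut], idx[cut:]
        m, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
        mse = sum((ys[i] - (m * xs[i] + b)) ** 2 for i in test) / len(test)
        if best is None or mse < best[0]:
            best = (mse, m, b)
    return best[1], best[2]
```

The selection over many random splits is what guards an automated workflow against a single lucky or unlucky partition of the data.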
Automated branching pattern report generation for laparoscopic surgery assistance
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Matsuzaki, Tetsuro; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku
2015-05-01
This paper presents a method for generating branching pattern reports of abdominal blood vessels for laparoscopic gastrectomy. In gastrectomy, it is very important to understand the branching structure of abdominal arteries and veins, which feed and drain specific abdominal organs including the stomach, the liver and the pancreas. In the real clinical stage, a surgeon creates a diagnostic report of the patient anatomy. This report summarizes the branching patterns of the blood vessels related to the stomach. The surgeon decides the actual operative procedure based on this report. This paper shows an automated method to generate a branching pattern report for abdominal blood vessels based on automated anatomical labeling. The report contains 3D renderings showing important blood vessels and descriptions of the branching patterns of each vessel. We have applied this method to fifty cases of 3D abdominal CT scans and confirmed that the proposed method can automatically generate branching pattern reports of abdominal arteries.
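Once vessels carry anatomical labels, the textual part of such a report is a tree traversal that lists each vessel and its branches. A minimal sketch follows; the vessel names and indentation format are illustrative, not the paper's report layout.

```python
def branching_report(tree, root, indent=0):
    """Recursively list each labeled vessel and its branches as report
    lines, indenting one level per generation of branching."""
    lines = ["  " * indent + root]
    for child in tree.get(root, []):
        lines.extend(branching_report(tree, child, indent + 1))
    return lines
```

Feeding it an adjacency map of labeled vessels yields the branching-pattern text, which a full system would pair with the 3D rendering.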
Automated peak picking and peak integration in macromolecular NMR spectra using AUTOPSY.
Koradi, R; Billeter, M; Engeli, M; Güntert, P; Wüthrich, K
1998-12-01
A new approach for automated peak picking of multidimensional protein NMR spectra with strong overlap is introduced, which makes use of the program AUTOPSY (automated peak picking for NMR spectroscopy). The main elements of this program are a novel function for local noise level calculation, the use of symmetry considerations, and the use of lineshapes extracted from well-separated peaks for resolving groups of strongly overlapping peaks. The algorithm generates peak lists with precise chemical shift and integral intensities, and a reliability measure for the recognition of each peak. The results of automated peak picking of NOESY spectra with AUTOPSY were tested in combination with the combined automated NOESY cross peak assignment and structure calculation routine NOAH implemented in the program DYANA. The quality of the resulting structures was found to be comparable with those from corresponding data obtained with manual peak picking. Copyright 1998 Academic Press.
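AUTOPSY's local-noise idea, thresholding each point against noise estimated in its own neighbourhood rather than against one global level, can be sketched in one dimension as below. The window size and the k-factor are illustrative assumptions, and the real program adds symmetry considerations and lineshape analysis that this sketch omits.

```python
import statistics

def local_noise(spectrum, window):
    """Per-point noise estimate: median absolute value over a sliding
    window. The median is robust to the occasional genuine peak."""
    n = len(spectrum)
    noise = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        noise.append(statistics.median(abs(x) for x in spectrum[lo:hi]))
    return noise

def find_peaks(spectrum, window=5, k=4.0):
    """Local maxima exceeding k times the local noise level."""
    noise = local_noise(spectrum, window)
    return [i for i in range(1, len(spectrum) - 1)
            if spectrum[i] > k * noise[i]
            and spectrum[i] >= spectrum[i - 1]
            and spectrum[i] >= spectrum[i + 1]]
```

Because the threshold tracks the local baseline, a peak in a quiet region is kept while ripples of the same absolute height in a noisy region are rejected.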
Bindewald, Eckart; Grunewald, Calvin; Boyle, Brett; O'Connor, Mary; Shapiro, Bruce A
2008-10-01
One approach to designing RNA nanoscale structures is to use known RNA structural motifs such as junctions, kissing loops or bulges and to construct a molecular model by connecting these building blocks with helical struts. We previously developed an algorithm for detecting internal loops, junctions and kissing loops in RNA structures. Here we present algorithms for automating or assisting many of the steps that are involved in creating RNA structures from building blocks: (1) assembling building blocks into nanostructures using either a combinatorial search or constraint satisfaction; (2) optimizing RNA 3D ring structures to improve ring closure; (3) sequence optimisation; (4) creating a unique non-degenerate RNA topology descriptor. This effectively creates a computational pipeline for generating molecular models of RNA nanostructures and more specifically RNA ring structures with optimized sequences from RNA building blocks. We show several examples of how the algorithms can be utilized to generate RNA tecto-shapes.
Grid generation on trimmed Bezier and NURBS quilted surfaces
NASA Technical Reports Server (NTRS)
Woan, Chung-Jin; Clever, Willard C.; Tam, Clement K.
1995-01-01
This paper presents some recently added capabilities to RAGGS, the Rockwell Automated Grid Generation System. Included are the trimmed-surface handling and display capability and structured and unstructured grid generation on trimmed Bezier and NURBS (non-uniform rational B-spline) quilted surfaces. Samples are given to demonstrate the new capabilities.
Practical computational toolkits for dendrimers and dendrons structure design.
Martinho, Nuno; Silva, Liana C; Florindo, Helena F; Brocchini, Steve; Barata, Teresa; Zloh, Mire
2017-09-01
Dendrimers and dendrons offer an excellent platform for developing novel drug delivery systems and medicines. The rational design and further development of these repetitively branched systems are restricted by difficulties in scalable synthesis and structural determination, which can be overcome by judicious use of molecular modelling and molecular simulations. A major difficulty to utilise in silico studies to design dendrimers lies in the laborious generation of their structures. Current modelling tools utilise automated assembly of simpler dendrimers or the inefficient manual assembly of monomer precursors to generate more complicated dendrimer structures. Herein we describe two novel graphical user interface toolkits written in Python that provide an improved degree of automation for rapid assembly of dendrimers and generation of their 2D and 3D structures. Our first toolkit uses the RDkit library, SMILES nomenclature of monomers and SMARTS reaction nomenclature to generate SMILES and mol files of dendrimers without 3D coordinates. These files are used for simple graphical representations and storing their structures in databases. The second toolkit assembles complex topology dendrimers from monomers to construct 3D dendrimer structures to be used as starting points for simulation using existing and widely available software and force fields. Both tools were validated for ease-of-use to prototype dendrimer structure and the second toolkit was especially relevant for dendrimers of high complexity and size.
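The recursive-assembly idea behind such toolkits can be sketched in a few lines: a core with open attachment sites is grown generation by generation by substituting branch monomers into those sites. The SMILES-like monomer strings below are simplified placeholders, not real chemistry, and this sketch does not reproduce the authors' RDKit-based implementation.

```python
# Minimal sketch of recursive dendrimer assembly from monomers.
CORE = "N({b})({b})({b})"   # trivalent core with three open sites
BRANCH = "CCN({b})({b})"    # monomer that doubles the open sites
TERMINAL = "CC"             # caps an open site

def dendrimer(generations):
    """Return a SMILES-like string for a dendrimer of the given
    generation count, grown by substituting branches into open sites."""
    def grow(level):
        if level == 0:
            return TERMINAL
        return BRANCH.format(b=grow(level - 1))
    return CORE.format(b=grow(generations))
```

The branch count grows geometrically with generation (3, then 9, then 21 monomers here), which is exactly why manual assembly of high-generation dendrimers becomes impractical.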
Xu, Wei
2007-12-01
This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
The purpose of this paper is to discuss grid generation issues and to challenge the grid generation community to develop tools suitable for automated multidisciplinary analysis and design optimization of aerospace vehicles. Special attention is given to the grid generation issues of computational fluid dynamics and computational structural mechanics disciplines.
Microreactor Cells for High-Throughput X-ray Absorption Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beesley, Angela; Tsapatsaris, Nikolaos; Weiher, Norbert
2007-01-19
High-throughput experimentation has been applied to X-ray absorption spectroscopy as a novel route for increasing research productivity in the catalysis community. Suitable instrumentation has been developed for the rapid determination of the local structure in the metal component of precursors for supported catalysts. An automated analytical workflow was implemented that is much faster than traditional individual spectrum analysis. It allows the generation of structural data in quasi-real time. We describe initial results obtained from the automated high-throughput (HT) data reduction and analysis of a sample library implemented through the 96-well-plate industrial standard. The results show that a fully automated HT-XAS technology based on existing industry standards is feasible and useful for the rapid elucidation of the geometric and electronic structure of materials.
Mission Critical Computer Resources Management Guide
1988-09-01
As shown in Figure 13-2, showrooms of larger, more capable pieces are developed off-line for later integration and use in multiple systems.
LYRA, a webserver for lymphocyte receptor structural modeling.
Klausen, Michael Schantz; Anderson, Mads Valdemar; Jespersen, Martin Closter; Nielsen, Morten; Marcatili, Paolo
2015-07-01
The accurate structural modeling of B- and T-cell receptors is fundamental to gaining detailed insight into the mechanisms underlying immunity and to developing new drugs and therapies. The LYRA (LYmphocyte Receptor Automated modeling) web server (http://www.cbs.dtu.dk/services/LYRA/) implements a complete and automated method for building B- and T-cell receptor structural models starting from their amino acid sequence alone. The webserver is freely available and easy to use for non-specialists. Upon submission, LYRA automatically generates alignments using ad hoc profiles, predicts the structural class of each hypervariable loop, selects the best templates in an automatic fashion, and provides within minutes a complete 3D model that can be downloaded or inspected online. Experienced users can manually select or exclude template structures according to case-specific information. LYRA is based on the canonical structure method, which in the last 30 years has been successfully used to generate antibody models of high accuracy, and in our benchmarks this approach proves to achieve similarly good results on TCR modeling, with a benchmarked average RMSD accuracy of 1.29 and 1.48 Å for B- and T-cell receptors, respectively. To the best of our knowledge, LYRA is the first automated server for the prediction of TCR structure. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
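The template-selection step of such a pipeline can be caricatured as follows; real servers like LYRA use profile alignments and canonical-structure rules, whereas this sketch simply ranks hypothetical equal-length framework templates by raw sequence identity.

```python
def sequence_identity(a, b):
    """Fraction of identical residues between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_template(query, templates):
    """Pick the (name, sequence) template closest to the query,
    mimicking the automated template-selection step of homology
    modeling servers (toy scoring only)."""
    return max(templates, key=lambda t: sequence_identity(query, t[1]))
```

In a real server this ranking would feed the chosen framework and loop templates into model building.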
Neural networks for structural design - An integrated system implementation
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han
1992-01-01
The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here, Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user, an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture, and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.
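The workflow of generating random analysis instances and fitting a surrogate to them can be sketched with a simple linear model standing in for the neural net; the cantilever-deflection formula and all parameters below are illustrative assumptions, not the paper's test cases.

```python
import random

def beam_deflection(load, length):
    """'Exact' structural analysis stand-in: cantilever tip deflection
    with unit stiffness (delta = P * L^3 / 3)."""
    return load * length ** 3 / 3.0

def train_surrogate(n_samples=200, epochs=500, lr=0.01, seed=0):
    """Mimic the workflow: sample random analysis instances covering
    the domain, then fit a surrogate (a one-weight linear model on the
    feature P*L^3) by full-batch gradient descent."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        p, l = rng.uniform(1, 2), rng.uniform(1, 2)
        data.append((p * l ** 3, beam_deflection(p, l)))
    w = 0.0
    for _ in range(epochs):
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # should approach the analytic coefficient 1/3
```

Once trained, the surrogate answers queries instantaneously, which is the payoff the abstract describes for optimization loops.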
NASA Technical Reports Server (NTRS)
Thompson, David S.; Soni, Bharat K.
2001-01-01
An integrated geometry/grid/simulation software package, ICEG2D, is being developed to automate computational fluid dynamics (CFD) simulations for single- and multi-element airfoils with ice accretions. The current version, ICEG2D (v2.0), was designed to automatically perform four primary functions: (1) generate a grid-ready surface definition based on the geometrical characteristics of the iced airfoil surface, (2) generate high-quality structured and generalized grids starting from a defined surface definition, (3) generate the input and restart files needed to run the structured grid CFD solver NPARC or the generalized grid CFD solver HYBFL2D, and (4) using the flow solutions, generate solution-adaptive grids. ICEG2D (v2.0) can be operated in either a batch mode using a script file or in an interactive mode by entering directives from a command line within a Unix shell. This report summarizes activities completed in the first two years of a three-year research and development program to address automation issues related to CFD simulations for airfoils with ice accretions. As well as describing the technology employed in the software, this document serves as a user's manual providing installation and operating instructions. An evaluation of the software is also presented.
Werner, Michael; Kuratli, Christoph; Martin, Rainer E; Hochstrasser, Remo; Wechsler, David; Enderle, Thilo; Alanine, Alexander I; Vogel, Horst
2014-02-03
Drug discovery is a multifaceted endeavor encompassing as its core element the generation of structure-activity relationship (SAR) data by repeated chemical synthesis and biological testing of tailored molecules. Herein, we report on the development of a flow-based biochemical assay and its seamless integration into a fully automated system comprising flow chemical synthesis, purification and in-line quantification of compound concentration. This novel synthesis-screening platform makes it possible to obtain SAR data on β-secretase (BACE1) inhibitors at an unprecedented cycle time of only 1 h instead of several days. Full integration and automation of industrial processes have always led to productivity gains and cost reductions, and this work demonstrates how applying these concepts to SAR generation may lead to a more efficient drug discovery process. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Motion generation of robotic surgical tasks: learning from expert demonstrations.
Reiley, Carol E; Plaku, Erion; Hager, Gregory D
2010-01-01
Robotic surgical assistants offer the possibility of automating portions of a task that are time consuming and tedious in order to reduce the cognitive workload of a surgeon. This paper proposes using programming by demonstration to build generative models and generate smooth trajectories that capture the underlying structure of the motion data recorded from expert demonstrations. Specifically, motion data from Intuitive Surgical's da Vinci Surgical System of a panel of expert surgeons performing three surgical tasks are recorded. The trials are decomposed into subtasks or surgemes, which are then temporally aligned through dynamic time warping. Next, a Gaussian Mixture Model (GMM) encodes the experts' underlying motion structure. Gaussian Mixture Regression (GMR) is then used to extract a smooth reference trajectory to reproduce a trajectory of the task. The approach is evaluated through an automated skill assessment measurement. Results suggest that this approach provides a means to (i) extract important features of the task, (ii) create a metric to evaluate robot imitative performance, and (iii) generate smoother trajectories for reproduction of three common medical tasks.
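The temporal-alignment step uses dynamic time warping, which admits a compact implementation; below is the standard textbook DTW cost computation for 1-D motion traces (a generic sketch, not the authors' code).

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between two 1-D traces:
    cumulative cost of the best monotone alignment, so identical
    motions executed at different speeds score near zero."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local mismatch
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

After alignment, the warped demonstrations can be stacked and fed to the GMM/GMR stage.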
Automated Conflict Resolution For Air Traffic Control
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2005-01-01
The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be evaluated under realistic traffic conditions to demonstrate that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.
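The detection side of the problem can be illustrated with a standard closest-point-of-approach test for two aircraft on straight-line trajectories; this toy check only sketches conflict detection, not the AAC resolution algorithm, and the separation and horizon values are illustrative.

```python
def conflict(p1, v1, p2, v2, sep=5.0, horizon=1200.0):
    """Detect a predicted loss of separation between two aircraft on
    straight-line 2-D trajectories: positions p (nm), velocities v
    (nm/s), minimum separation sep (nm), look-ahead horizon (s)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    # time of closest approach, clamped to [0, horizon]
    t = 0.0 if dv2 == 0 else max(0.0, min(horizon,
                                          -(dx * dvx + dy * dvy) / dv2))
    cx, cy = dx + dvx * t, dy + dvy * t
    return (cx * cx + cy * cy) ** 0.5 < sep
```

A resolution engine would then search over heading, speed, and altitude amendments until such checks clear for all aircraft pairs.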
The Electrolyte Genome project: A big data approach in battery materials discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xiaohui; Jain, Anubhav; Rajput, Nav Nidhi
2015-06-01
We present a high-throughput infrastructure for the automated calculation of molecular properties with a focus on battery electrolytes. The infrastructure is largely open-source and handles both practical aspects (input file generation, output file parsing, and information management) as well as more complex problems (structure matching, salt complex generation, and failure recovery). Using this infrastructure, we have computed the ionization potential (IP) and electron affinities (EA) of 4830 molecules relevant to battery electrolytes (encompassing almost 55,000 quantum mechanics calculations) at the B3LYP/6-31+G(*) level. We describe automated workflows for computing redox potential, dissociation constant, and salt-molecule binding complex structure generation. We present routines for automatic recovery from calculation errors, which brings the failure rate from 9.2% to 0.8% for the QChem DFT code. Automated algorithms to check duplication between two arbitrary molecules and structures are described. We present benchmark data on basis sets and functionals on the G2-97 test set; one finding is that an IP/EA calculation method that combines PBE geometry optimization and B3LYP energy evaluation requires less computational cost and yields nearly identical results as compared to a full B3LYP calculation, and could be suitable for the calculation of large molecules. Our data indicate that among the 8 functionals tested, XYGJ-OS and B3LYP are the two best functionals to predict IP/EA with an RMSE of 0.12 and 0.27 eV, respectively. Application of our automated workflow to a large set of quinoxaline derivative molecules shows that functional group effects and substitution position effects can be separated for the IP/EA of quinoxaline derivatives, and the most sensitive position is different for IP and EA. Published by Elsevier B.V.
Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid
2005-06-01
FE modeling of complex anatomical structures has not yet been solved satisfactorily. Voxel-based as opposed to contour-based algorithms allow automated mesh generation based on the image data; nonetheless, their geometric precision is limited. We developed an automated mesh generator that combines the advantages of voxel-based generation with improved representation of the geometry by displacement of nodes on the object surface. Models of an artificial 3D pipe section and a skullbase were generated with different mesh densities using the newly developed geometric, unsmoothed and smoothed voxel generators. Compared to the analytic calculation of the 3D pipe-section model, the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models with small volume error, and 0.126-0.273 for the geometric models. The highest element-energy error as a criterion for mesh quality was 2.61x10^-2 N mm, 2.46x10^-2 N mm and 1.81x10^-2 N mm for the unsmoothed, smoothed and geometric voxel models, respectively. The geometric model of the 3D skullbase resulted in the lowest element-energy error and volume error. This algorithm also allowed the best representation of anatomical details. The presented geometric mesh generator is universally applicable and allows automated and accurate modeling by combining the advantages of the voxel technique and of improved surface modeling.
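A normalized RMS error of the kind quoted above can be computed as follows; the exact normalization convention used by the authors is an assumption here (deviation RMS divided by reference RMS).

```python
def normalized_rms_error(computed, reference):
    """Normalized RMS deviation of computed FE surface stresses from
    an analytic reference: RMS of the error divided by the RMS of the
    reference values (assumed normalization, one common convention)."""
    n = len(reference)
    rms = (sum((c - r) ** 2 for c, r in zip(computed, reference)) / n) ** 0.5
    ref_rms = (sum(r ** 2 for r in reference) / n) ** 0.5
    return rms / ref_rms
```

Under this convention a perfect mesh scores 0 and a model that doubles every stress scores 1, which makes the quoted 0.1-0.6 range easy to interpret.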
Dixon, Steven L; Duan, Jianxin; Smith, Ethan; Von Bargen, Christopher D; Sherman, Woody; Repasky, Matthew P
2016-10-01
We introduce AutoQSAR, an automated machine-learning application to build, validate and deploy quantitative structure-activity relationship (QSAR) models. The process of descriptor generation, feature selection and the creation of a large number of QSAR models has been automated into a single workflow within AutoQSAR. The models are built using a variety of machine-learning methods, and each model is scored using a novel approach. Effectiveness of the method is demonstrated through comparison with literature QSAR models using identical datasets for six end points: protein-ligand binding affinity, solubility, blood-brain barrier permeability, carcinogenicity, mutagenicity and bioaccumulation in fish. AutoQSAR demonstrates similar or better predictive performance as compared with published results for four of the six endpoints while requiring minimal human time and expertise.
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
NASA Astrophysics Data System (ADS)
Zhou, X.; Hayashi, T.; Han, M.; Chen, H.; Hara, T.; Fujita, H.; Yokoyama, R.; Kanematsu, M.; Hoshi, H.
2009-02-01
X-ray CT images have been widely used in clinical diagnosis in recent years. A modern CT scanner can generate about 1000 CT slices showing the details of all the human organs within 30 seconds. However, CT image interpretation (manually viewing 500-1000 slices of CT images on a screen or films for each patient) requires a lot of time and energy. Therefore, computer-aided diagnosis (CAD) systems that can support CT image interpretation are strongly anticipated. Automated recognition of the anatomical structures in CT images is a basic pre-processing step of the CAD system. The bone structure is one such anatomical structure and is very useful as a set of landmarks for predicting the positions of other organs. However, automated recognition of the bone structure is still a challenging issue. This research proposes an automated scheme for segmenting the bone regions and recognizing the bone structure in non-contrast torso CT images. The proposed scheme was applied to 48 torso CT cases, and a subjective evaluation of the experimental results was carried out by an anatomical expert following the anatomical definition. The experimental results showed that the bone structure in 90% of the CT cases was recognized correctly. For quantitative evaluation, automated recognition results were compared to manual inputs of the bones of the lower limb created by an anatomical expert on 10 randomly selected CT cases. The error (maximum distance in 3D) between the recognition results and the manual inputs ranged from 3-8 mm in different parts of the bone regions.
Shen, Hong-Bin; Yi, Dong-Liang; Yao, Li-Xiu; Yang, Jie; Chou, Kuo-Chen
2008-10-01
In the postgenomic age, with the avalanche of protein sequences generated and relatively slow progress in determining their structures by experiments, it is important to develop automated methods to predict the structure of a protein from its sequence. The membrane proteins are a special group in the protein family that accounts for approximately 30% of all proteins; however, solved membrane protein structures represent less than 1% of known protein structures to date. Although great success has been achieved in developing computational intelligence techniques to predict secondary structures in both globular and membrane proteins, there is still much challenging work in this regard. In this review article, we first summarize the recent progress in automated methods for predicting protein secondary structures, especially in membrane proteins, and then give some future directions in this research field.
Approaches to automated protein crystal harvesting
Deller, Marc C.; Rupp, Bernhard
2014-01-01
The harvesting of protein crystals is almost always a necessary step in the determination of a protein structure using X-ray crystallographic techniques. However, protein crystals are usually fragile and susceptible to damage during the harvesting process. For this reason, protein crystal harvesting is the single step that remains entirely dependent on skilled human intervention. Automation has been implemented in the majority of other stages of the structure-determination pipeline, including cloning, expression, purification, crystallization and data collection. The gap in automation between crystallization and data collection results in a bottleneck in throughput and presents unfortunate opportunities for crystal damage. Several automated protein crystal harvesting systems have been developed, including systems utilizing microcapillaries, microtools, microgrippers, acoustic droplet ejection and optical traps. However, these systems have yet to be commonly deployed in the majority of crystallography laboratories owing to a variety of technical and cost-related issues. Automation of protein crystal harvesting remains essential for harnessing the full benefits of fourth-generation synchrotrons, free-electron lasers and microfocus beamlines. Furthermore, automation of protein crystal harvesting offers several benefits when compared with traditional manual approaches, including the ability to harvest microcrystals, improved flash-cooling procedures and increased throughput. PMID:24637746
Using Generative Representations to Evolve Robots. Chapter 1
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2004-01-01
Recent research has demonstrated the ability of evolutionary algorithms to automatically design both the physical structure and software controller of real physical robots. One of the challenges for these automated design systems is to improve their ability to scale to the high complexities found in real-world problems. Here we claim that for automated design systems to scale in complexity they must use a representation which allows for the hierarchical creation and reuse of modules, which we call a generative representation. Not only is the ability to reuse modules necessary for functional scalability, but it is also valuable for improving efficiency in testing and construction. We then describe an evolutionary design system with a generative representation capable of hierarchical modularity and demonstrate it for the design of locomoting robots in simulation. Finally, results from our experiments show that evolution with our generative representation produces better robots than those evolved with a non-generative representation.
Strategies Toward Automation of Overset Structured Surface Grid Generation
NASA Technical Reports Server (NTRS)
Chan, William M.
2017-01-01
An outline of a strategy for automation of overset structured surface grid generation on complex geometries is described. The starting point of the process consists of an unstructured surface triangulation representation of the geometry derived from a native CAD, STEP, or IGES definition, and a set of discretized surface curves that captures all geometric features of interest. The procedure for surface grid generation is decomposed into an algebraic meshing step, a hyperbolic meshing step, and a gap-filling step. This paper will focus primarily on the high-level plan with details on the algebraic step. The algorithmic procedure for the algebraic step involves analyzing the topology of the network of surface curves, distributing grid points appropriately on these curves, identifying domains bounded by four curves that can be meshed algebraically, concatenating the resulting grids into fewer patches, and extending appropriate boundaries of the concatenated grids to provide proper overlap. Results are presented for grids created on various aerospace vehicle components.
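The algebraic step of meshing a domain bounded by four curves is classically done with transfinite (Coons) interpolation, which blends the boundary curves into an interior grid; a minimal 2-D sketch, with each boundary given as a callable c(t) -> (x, y) on [0, 1] and matching corners assumed:

```python
def coons_patch(bottom, top, left, right, nu, nv):
    """Transfinite (Coons) interpolation: build an (nu+1) x (nv+1)
    structured grid inside a domain bounded by four discretized
    curves, the standard algebraic meshing step."""
    grid = []
    for i in range(nu + 1):
        u = i / nu
        row = []
        for j in range(nv + 1):
            v = j / nv
            pt = []
            for k in range(2):  # x and y components
                # linear blend of opposite boundary pairs
                lin = (1 - v) * bottom(u)[k] + v * top(u)[k] \
                    + (1 - u) * left(v)[k] + u * right(v)[k]
                # bilinear corner correction (counted twice above)
                cor = (1 - u) * (1 - v) * bottom(0)[k] \
                    + u * (1 - v) * bottom(1)[k] \
                    + (1 - u) * v * top(0)[k] + u * v * top(1)[k]
                pt.append(lin - cor)
            row.append(tuple(pt))
        grid.append(row)
    return grid
```

On a unit square the patch reduces to a uniform Cartesian grid; on curved boundary quartets it yields the initial algebraic mesh that a hyperbolic or elliptic smoother would then refine.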
Potrzebowski, Wojciech; André, Ingemar
2015-07-01
For highly oriented fibrillar molecules, three-dimensional structures can often be determined from X-ray fiber diffraction data. However, because of limited information content, structure determination and validation can be challenging. We demonstrate that automated structure determination of protein fibers can be achieved by guiding the building of macromolecular models with fiber diffraction data. We illustrate the power of our approach by determining the structures of six bacteriophage viruses de novo using fiber diffraction data alone and together with solid-state NMR data. Furthermore, we demonstrate the feasibility of molecular replacement from monomeric and fibrillar templates by solving the structure of a plant virus using homology modeling and protein-protein docking. The generated models explain the experimental data to the same degree as deposited reference structures but with improved structural quality. We also developed a cross-validation method for model selection. The results highlight the power of fiber diffraction data as structural constraints.
Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom
2018-01-09
We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
Models and Systems for Structurization of Knowledge in Training
ERIC Educational Resources Information Center
Pelin, Nicolae; Pelin, Serghei
2007-01-01
In this work, the problems of the automated structurization and activation of knowledge saved and used by mankind during organization and training, and also of the knowledge generated by experts (including teachers) in their current activity, are analyzed. The purpose is the further perfection of methods and systems of the automated…
Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.
Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R
2012-06-01
The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. 
With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
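The average-SUV measurement underlying the validation can be illustrated with a minimal sketch (the function name, argument units and the simple voxel mean are illustrative assumptions, not the authors' implementation; SUV is activity concentration normalized by injected dose per unit body weight):

```python
def mean_suv(voxel_activities, body_weight_g, injected_dose_bq):
    """Average standardized uptake value over a segmented structure
    (e.g. cerebellum, liver, or aortic arch).

    voxel_activities: activity concentrations (Bq/mL) of the voxels
    inside the segmentation. SUV = concentration / (dose / weight),
    assuming 1 g of tissue per mL."""
    if not voxel_activities:
        raise ValueError("empty segmentation")
    mean_conc = sum(voxel_activities) / len(voxel_activities)
    return mean_conc * body_weight_g / injected_dose_bq
```

For example, a 70 kg patient injected with 70 MBq whose segmented region averages 2000 Bq/mL yields a mean SUV of 2.0.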
Zelesky, Veronica; Schneider, Richard; Janiszewski, John; Zamora, Ismael; Ferguson, James; Troutman, Matthew
2013-05-01
The ability to supplement high-throughput metabolic clearance data with structural information defining the site of metabolism should allow design teams to streamline their synthetic decisions. However, broad application of metabolite identification in early drug discovery has been limited, largely due to the time required for data review and structural assignment. The advent of mass defect filtering and its application toward metabolite scouting paved the way for the development of software automation tools capable of rapidly identifying drug-related material in complex biological matrices. Two semi-automated commercial software applications, MetabolitePilot™ and Mass-MetaSite™, were evaluated to assess the relative speed and accuracy of structural assignments using data generated on a high-resolution MS platform. Review of these applications has demonstrated their utility in providing accurate results in a time-efficient manner, leading to acceleration of metabolite identification initiatives while highlighting the continued need for biotransformation expertise in the interpretation of more complex metabolic reactions.
Kosinski, Jan; Gajda, Michal J; Cymerman, Iwona A; Kurowski, Michal A; Pawlowski, Marcin; Boniecki, Michal; Obarska, Agnieszka; Papaj, Grzegorz; Sroczynska-Obuchowicz, Paulina; Tkaczuk, Karolina L; Sniezynska, Paulina; Sasin, Joanna M; Augustyn, Anna; Bujnicki, Janusz M; Feder, Marcin
2005-01-01
In the course of CASP6, we generated models for all targets using a new version of the "FRankenstein's monster approach." Previously (in CASP5) we were able to build many very accurate full-atom models by selection and recombination of well-folded fragments obtained from crude fold recognition (FR) results, followed by optimization of the sequence-structure fit and assessment of alternative alignments on the structural level. This procedure was, however, very arduous, as most of the steps required extensive visual and manual input from the human modeler. Now, we have automated the most tedious steps, such as superposition of alternative models, extraction of best-scoring fragments, and construction of a hybrid "monster" structure, as well as generation of alternative alignments in the regions that remain poorly scored in the refined hybrid model. We have also included the ROSETTA method to construct those parts of the target for which no reasonable structures were generated by FR methods (such as long insertions and terminal extensions). The analysis of successes and failures of the current version of the FRankenstein approach in modeling of CASP6 targets reveals that the considerably streamlined and automated method performs almost as well as the initial, mostly manual version, which suggests that it may be a useful tool for accurate protein structure prediction even in the hands of nonexperts. © 2005 Wiley-Liss, Inc.
Means of storage and automated monitoring of versions of text technical documentation
NASA Astrophysics Data System (ADS)
Leonovets, S. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The paper considers automating the preparation, storage and version-control monitoring of textual design and program documentation by means of specialized software. Automated preparation of documentation is based on processing the engineering data contained in the specifications and technical documentation. Data handling presumes strictly structured electronic documents, prepared in widespread formats from templates based on industry standards, from which the program or design text document is generated automatically. The subsequent life cycle of the document and of the engineering data it contains is then monitored, with archival data storage carried out at each stage. Benchmarks of processing speed for different widespread document formats under automated monitoring and storage are given. The newly developed software and the workbenches available to the developer of instrumentation equipment are described.
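The version-monitoring idea, recording a new revision only when the document content actually changes and keeping the full history in an archive, might be sketched as follows (the archive layout and function name are hypothetical, not the authors' implementation):

```python
import hashlib
import time

def register_version(archive, doc_id, text):
    """Store a new version of a text document only if its content
    changed, preserving the revision history for later monitoring.

    archive: dict mapping document id -> list of version records."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    history = archive.setdefault(doc_id, [])
    if history and history[-1]["sha256"] == digest:
        return False  # content unchanged: no new version recorded
    history.append({"sha256": digest, "text": text, "stored": time.time()})
    return True
```

Comparing content hashes rather than timestamps makes the check independent of the document format used for storage.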
Open-Source Programming for Automated Generation of Graphene Raman Spectral Maps
NASA Astrophysics Data System (ADS)
Vendola, P.; Blades, M.; Pierre, W.; Jedlicka, S.; Rotkin, S. V.
Raman microscopy is a useful tool for studying the structural characteristics of graphene deposited onto substrates. However, extracting useful information from the Raman spectra requires data processing and 2D map generation. An existing home-built confocal Raman microscope was optimized for graphene samples and programmed to automatically generate Raman spectral maps across a specified area. In particular, an open source data collection scheme was generated to allow the efficient collection and analysis of the Raman spectral data for future use. NSF ECCS-1509786.
Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk
2016-01-01
To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often utilized to visualize these properties, but with the increasing number of available structures in databases or structure models produced by automated modeling frameworks this process requires assistance from tools that allow automated structure visualization. In this paper a web server and its underlying method for generating graphical sequence representations of molecular structures is presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, where residue coloring represents three-dimensional residue locations in the structure. This color encoding thus provides a one-dimensional representation, from which spatial interactions, proximity and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, like ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, underlying raw data can be downloaded as well. In addition, the server provides user interactivity with generated visualizations and the three-dimensional structure in question.
Color encoded sequences generated by SequenceCEROSENE can aid to quickly perceive the general characteristics of a structure of interest (or entire sets of complexes), thus supporting the researcher in the initial phase of structure-based studies. In this respect, the web server can be a valuable tool, as users are allowed to process multiple structures, quickly switch between results, and interact with generated visualizations in an intuitive manner. The SequenceCEROSENE web server is available at https://biosciences.hs-mittweida.de/seqcerosene.
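The core idea, coloring residues so that spatial proximity maps to color similarity, can be illustrated with a deliberately simplified sketch that normalizes raw 3D coordinates straight into RGB channels (SequenceCEROSENE's actual spatial-neighborhood embedding is more sophisticated than this stand-in):

```python
def position_colors(coords):
    """Map each residue's (x, y, z) coordinate to an RGB triple by
    normalizing each axis into 0..255, so residues that are close in
    space receive similar colors. Simplified stand-in for the
    embedding-based color coding."""
    mins = [min(c[i] for c in coords) for i in range(3)]
    maxs = [max(c[i] for c in coords) for i in range(3)]
    # guard against a degenerate (flat) axis
    spans = [(maxs[i] - mins[i]) or 1.0 for i in range(3)]
    return [tuple(round(255 * (c[i] - mins[i]) / spans[i]) for i in range(3))
            for c in coords]
```

The output is one color per residue, which can then be used to highlight the one-dimensional sequence.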
NASA Astrophysics Data System (ADS)
Ashwood, Christopher; Lin, Chi-Hung; Thaysen-Andersen, Morten; Packer, Nicolle H.
2018-03-01
Profiling cellular protein glycosylation is challenging due to the presence of highly similar glycan structures that play diverse roles in cellular physiology. As the anomericity and the exact linkage type of a single glycosidic bond can influence glycan function, there is a demand for improved and automated methods to confirm detailed structural features and to discriminate between structurally similar isomers, overcoming a significant bottleneck in the analysis of data generated by glycomics experiments. We used porous graphitized carbon-LC-ESI-MS/MS to separate and detect released N- and O-glycan isomers from mammalian model glycoproteins using negative mode resonance activation CID-MS/MS. By interrogating similar fragment spectra from closely related glycan isomers that differ only in arm position and sialyl linkage, product fragment ions for discrimination between these features were discovered. Using the Skyline software, at least two diagnostic fragment ions of high specificity were validated for automated discrimination of sialylation and arm position in N-glycan structures, and sialylation in O-glycan structures, complementing existing structural diagnostic ions. These diagnostic ions were shown to be useful for isomer discrimination using both linear and 3D ion trap mass spectrometers when analyzing complex glycan mixtures from cell lysates. Skyline was found to serve as a useful tool for automated assessment of glycan isomer discrimination. This platform-independent workflow can potentially be extended to automate the characterization and quantitation of other challenging glycan isomers.
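The decision rule described, calling a structural feature only when at least two diagnostic fragment ions are observed, might look like this in outline (the m/z values and isomer labels in the example are placeholders for illustration, not the validated ion tables):

```python
def classify_isomer(peaks, diagnostics, tol=0.02):
    """Match observed fragment m/z values against per-isomer diagnostic
    ions. An isomer is called only if at least two of its diagnostic
    ions are present within the m/z tolerance, mirroring the
    requirement for >= 2 ions of high specificity.

    peaks: observed fragment m/z list.
    diagnostics: dict mapping isomer label -> list of diagnostic m/z."""
    calls = []
    for isomer, ions in diagnostics.items():
        hits = sum(any(abs(p - ion) <= tol for p in peaks) for ion in ions)
        if hits >= 2:
            calls.append(isomer)
    return calls
```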
The value proposition of structured reporting in interventional radiology.
Durack, Jeremy C
2014-10-01
The purposes of this article are to provide a brief overview of structured radiology reporting and to emphasize the anticipated benefits from a new generation of standardized interventional radiology procedure reports. Radiology reporting standards and tools have evolved to enable automated data integration from multiple institutions using structured templates. In interventional radiology, data aggregated into clinical, research and quality registries from enriched structured reports could firmly establish the interventional radiology value proposition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H; Liang, X; Kalbasi, A
2014-06-01
Purpose: Advanced radiotherapy (RT) techniques such as proton pencil beam scanning (PBS) and photon-based volumetric modulated arc therapy (VMAT) have dosimetric advantages in the treatment of head and neck malignancies. However, anatomic or alignment changes during treatment may limit robustness of PBS and VMAT plans. We assess the feasibility of automated deformable registration tools for robustness evaluation in adaptive PBS and VMAT RT of oropharyngeal cancer (OPC). Methods: We treated 10 patients with bilateral OPC with advanced RT techniques and obtained verification CT scans with physician-reviewed target and OAR contours. We generated 3 advanced RT plans for each patient: a proton PBS plan using 2 posterior oblique fields (2F), a proton PBS plan using an additional third low-anterior field (3F), and a photon VMAT plan using 2 arcs (Arc). For each of the planning techniques, we forward calculated initial (Ini) plans on the verification scans to create verification (V) plans. We extracted DVH indicators based on physician-generated contours for 2 target and 14 OAR structures to investigate the feasibility of two automated tools (contour propagation (CP) and dose deformation (DD)) as surrogates for routine clinical plan robustness evaluation. For each verification scan, we compared DVH indicators of V, CP and DD plans in a head-to-head fashion using Student's t-test. Results: We performed 39 verification scans; each patient underwent 3 to 6 verification scans. We found no differences in doses to target or OAR structures between V and CP, V and DD, and CP and DD plans across all patients (p > 0.05). Conclusions: Automated robustness evaluation tools, CP and DD, accurately predicted dose distributions of verification (V) plans using physician-generated contours.
These tools may be further developed as a potential robustness screening tool in the workflow for adaptive treatment of OPC using advanced RT techniques, reducing the need for physician-generated contours.
Anandakrishnan, Ramu; Aguilar, Boris; Onufriev, Alexey V
2012-07-01
The accuracy of atomistic biomolecular modeling and simulation studies depends on the accuracy of the input structures. Preparing these structures for an atomistic modeling task, such as molecular dynamics (MD) simulation, can involve the use of a variety of different tools for: correcting errors, adding missing atoms, filling valences with hydrogens, predicting pK values for titratable amino acids, assigning predefined partial charges and radii to all atoms, and generating force field parameter/topology files for MD. Identifying, installing and effectively using the appropriate tools for each of these tasks can be difficult for novice users and time-consuming for experienced ones. H++ (http://biophysics.cs.vt.edu/) is a free open-source web server that automates the above key steps in the preparation of biomolecular structures for molecular modeling and simulations. H++ also performs extensive error and consistency checking, providing error/warning messages together with the suggested corrections. In addition to numerous minor improvements, the latest version of H++ includes several new capabilities and options: fix erroneous (flipped) side chain conformations for HIS, GLN and ASN, include a ligand in the input structure, process nucleic acid structures and generate a solvent box with specified number of common ions for explicit solvent MD.
NASA Technical Reports Server (NTRS)
Radovcich, N. A.
1984-01-01
The design experience associated with a benchmark aeroelastic design of an out of production transport aircraft is discussed. Current work being performed on a high aspect ratio wing design is reported. The Preliminary Aeroelastic Design of Structures (PADS) system is briefly summarized and some operational aspects of generating the design in an automated aeroelastic design environment are discussed.
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
Advanced tow placement of composite fuselage structure
NASA Technical Reports Server (NTRS)
Anderson, Robert L.; Grant, Carroll G.
1992-01-01
The Hercules NASA ACT program was established to demonstrate and validate the low cost potential of the automated tow placement process for fabrication of aircraft primary structures. The program is currently being conducted as a cooperative program in collaboration with the Boeing ATCAS Program. The Hercules advanced tow placement process has been in development since 1982 and was developed specifically for composite aircraft structures. The second generation machine, now in operation at Hercules, is a production-ready machine that uses a low cost prepreg tow material form to produce structures with laminate properties equivalent to prepreg tape layup. Current program activities are focused on demonstration of the automated tow placement process for fabrication of subsonic transport aircraft fuselage crown quadrants. We are working with Boeing Commercial Aircraft and Douglas Aircraft during this phase of the program. The Douglas demonstration panel has co-cured skin/stringers, and the Boeing demonstration panel is an intricately bonded part with co-cured skin/stringers and co-bonded frames. Other aircraft structures that were evaluated for the automated tow placement process include engine nacelle components, fuselage pressure bulkheads, and fuselage tail cones. Because of the cylindrical shape of these structures, multiple parts can be fabricated on one tow placement tool, thus reducing the cost per pound of the finished part.
Towards protein-crystal centering using second-harmonic generation (SHG) microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kissick, David J.; Dettmar, Christopher M.; Becker, Michael
2013-05-01
The potential of second-harmonic generation (SHG) microscopy for automated crystal centering to guide synchrotron X-ray diffraction of protein crystals was explored. These studies included (i) comparison of microcrystal positions in cryoloops as determined by SHG imaging and by X-ray diffraction rastering and (ii) X-ray structure determinations of selected proteins to investigate the potential for laser-induced damage from SHG imaging. In studies using β2 adrenergic receptor membrane-protein crystals prepared in lipidic mesophase, the crystal locations identified by SHG images obtained in transmission mode were found to correlate well with the crystal locations identified by raster scanning using an X-ray minibeam. SHG imaging was found to provide about 2 µm spatial resolution and shorter image-acquisition times. The general insensitivity of SHG images to optical scatter enabled the reliable identification of microcrystals within opaque cryocooled lipidic mesophases that were not identified by conventional bright-field imaging. The potential impact of extended exposure of protein crystals to five times a typical imaging dose from an ultrafast laser source was also assessed. Measurements of myoglobin and thaumatin crystals resulted in no statistically significant differences between structures obtained from diffraction data acquired from exposed and unexposed regions of single crystals. Practical constraints for integrating SHG imaging into an active beamline for routine automated crystal centering are discussed.
E-novo: an automated workflow for efficient structure-based lead optimization.
Pearce, Bradley C; Langley, David R; Kang, Jia; Huang, Hongwei; Kulkarni, Amit
2009-07-01
An automated E-Novo protocol designed as a structure-based lead optimization tool was prepared through Pipeline Pilot with existing CHARMm components in Discovery Studio. A scaffold core having 3D binding coordinates of interest is generated from a ligand-bound protein structural model. Ligands of interest are generated from the scaffold using an R-group fragmentation/enumeration tool within E-Novo, with their cores aligned. The ligand side chains are conformationally sampled and are subjected to core-constrained protein docking, using a modified CHARMm-based CDOCKER method to generate top poses along with CDOCKER energies. In the final stage of E-Novo, a physics-based binding energy scoring function ranks the top ligand CDOCKER poses using a more accurate Molecular Mechanics-Generalized Born with Surface Area method. Correlation of the calculated ligand binding energies with experimental binding affinities was used to validate protocol performance. Inhibitors of Src tyrosine kinase, CDK2 kinase, beta-secretase, factor Xa, HIV protease, and thrombin were used to test the protocol using published ligand crystal structure data within reasonably defined binding sites. In-house Respiratory Syncytial Virus inhibitor data were used as a more challenging test set using a hand-built binding model. Least squares fits for all data sets suggested reasonable validation of the protocol within the context of observed ligand binding poses. The E-Novo protocol provides a convenient all-in-one structure-based design process for rapid assessment and scoring of lead optimization libraries.
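The validation step, a least-squares fit of calculated binding energies against experimental affinities, reduces to a few lines (a generic linear fit for illustration, not the E-Novo scoring code):

```python
def least_squares_fit(x, y):
    """Slope, intercept and r^2 of the best-fit line y = a*x + b,
    e.g. for checking how well computed binding energies track
    experimental binding affinities."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = (sxy * sxy) / (sxx * syy) if syy else 1.0
    return slope, intercept, r2
```

A high r² across a congeneric series is the kind of evidence the protocol's validation relies on.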
Tuszynski, Tobias; Rullmann, Michael; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Gertz, Hermann-Josef; Hesse, Swen; Seese, Anita; Lobsien, Donald; Sabri, Osama; Barthel, Henryk
2016-06-01
For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures were developed. As the quality and performance of those tools are poorly investigated so far in analyzing amyloid PET data, in this project we compared four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney-U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. Best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has great potential to substitute for the current standard procedure to manually define VOIs in β-amyloid PET data analysis.
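The two quantitative steps, SUVR computation against a cerebellar reference and correlation of automated against manual values, reduce to a few lines (a sketch for illustration, not code from any of the evaluated tools):

```python
from statistics import mean

def suvr(region_uptake, cerebellar_uptake):
    """Standardized uptake value ratio with the cerebellar cortex
    as reference region."""
    return region_uptake / cerebellar_uptake

def pearson_r(xs, ys):
    """Pearson correlation, e.g. between SUVRs from automatically
    generated VOIs and SUVRs from hand-drawn VOIs."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```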
Automated hexahedral mesh generation from biomedical image data: applications in limb prosthetics.
Zachariah, S G; Sanders, J E; Turkiyyah, G M
1996-06-01
A general method to generate hexahedral meshes for finite element analysis of residual limbs and similar biomedical geometries is presented. The method utilizes skeleton-based subdivision of cross-sectional domains to produce simple subdomains in which structured meshes are easily generated. Application to a below-knee residual limb and external prosthetic socket is described. The residual limb was modeled as consisting of bones, soft tissue, and skin. The prosthetic socket model comprised a socket wall with an inner liner. The geometries of these structures were defined using axial cross-sectional contour data from X-ray computed tomography, optical scanning, and mechanical surface digitization. A tubular surface representation, using B-splines to define the directrix and generator, is shown to be convenient for definition of the structure geometries. Conversion of cross-sectional data to the compact tubular surface representation is direct, and the analytical representation simplifies geometric querying and numerical optimization within the mesh generation algorithms. The element meshes remain geometrically accurate since boundary nodes are constrained to lie on the tubular surfaces. Several element meshes of increasing mesh density were generated for two residual limbs and prosthetic sockets. Convergence testing demonstrated that approximately 19 elements are required along a circumference of the residual limb surface for a simple linear elastic model. A model with the fibula absent compared with the same geometry with the fibula present showed differences suggesting higher distal stresses in the absence of the fibula. Automated hexahedral mesh generation algorithms for sliced data represent an advancement in prosthetic stress analysis since they allow rapid modeling of any given residual limb and optimization of mesh parameters.
Micro-total envelope system with silicon nanowire separator for safe carcinogenic chemistry.
Singh, Ajay K; Ko, Dong-Hyeon; Vishwakarma, Niraj K; Jang, Seungwook; Min, Kyoung-Ik; Kim, Dong-Pyo
2016-02-26
Exploration and expansion of the chemistries involving toxic or carcinogenic reagents are severely limited by the health hazards their presence poses. Here, we present a micro-total envelope system (μ-TES) and an automated total process for the generation of the carcinogenic reagent, its purification and its utilization for a desired synthesis that is totally enveloped from being exposed to the carcinogen. A unique microseparator is developed on the basis of SiNWs structure to replace the usual exposure-prone distillation in separating the generated reagent. Chloromethyl methyl ether chemistry is explored as a carcinogenic model in demonstrating the efficiency of the μ-TES that is fully automated so that feeding the ingredients for the generation is all it takes to produce the desired product. Syntheses taking days can be accomplished safely in minutes with excellent yields, which bodes well for elevating the carcinogenic chemistry to new unexplored dimensions.
Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)
NASA Technical Reports Server (NTRS)
Baroff, Lynn; Dischinger, Charlie; Fitts, David
2009-01-01
Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems; and the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs; for surface operations as well as for in-flight monitoring and control; and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.
SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.
Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen
2012-07-23
We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
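The basic pivot from structurally related compounds into a table of cores versus substituents can be sketched as follows (the input tuple format is an illustrative assumption, not the authors' data structure):

```python
from collections import defaultdict

def sar_matrix(compounds):
    """Pivot (core, r_group, activity) records into a nested mapping
    core -> {r_group: activity}, the basic shape of an SAR matrix
    where rows share a scaffold and columns share a substituent."""
    matrix = defaultdict(dict)
    for core, r_group, activity in compounds:
        matrix[core][r_group] = activity
    return dict(matrix)
```

Reading across a row then shows how activity responds to R-group changes on a fixed core; reading down a column shows how one substituent behaves across cores.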
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
2015-01-01
Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
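The connectivity-based artifact filtering described, segregating structured pixels and discarding small clusters, can be illustrated minimally (a generic 4-connected component filter, not the Crossword implementation):

```python
def remove_small_components(mask, min_size):
    """Drop 4-connected pixel clusters smaller than min_size from a
    binary mask, given as a set of (row, col) pixels. Small isolated
    clusters of structured pixels are treated as artifacts."""
    mask = set(mask)
    kept, seen = set(), set()
    for start in mask:
        if start in seen:
            continue
        # flood-fill one connected component
        component, stack = [], [start]
        seen.add(start)
        while stack:
            r, c = stack.pop()
            component.append((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in mask and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        if len(component) >= min_size:
            kept.update(component)
    return kept
```

In a full pipeline this step would run after clustering separates structured pixels from background noise, and before data extraction.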
Towards Automated Structure-Based NMR Resonance Assignment
NASA Astrophysics Data System (ADS)
Jang, Richard; Gao, Xin; Li, Ming
We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, fivefold on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.
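For a toy instance, the underlying 0-1 assignment of spin systems to residues can be solved by exhaustive search (the integer-programming model in the paper, with its pairwise dependencies, is what makes realistic problem sizes tractable; this brute-force stand-in only illustrates the objective):

```python
from itertools import permutations

def best_assignment(score):
    """Exhaustive solver for a tiny spin-system-to-residue assignment:
    score[i][j] is the fit of spin system i at residue j. Returns the
    one-to-one mapping (residue index per spin system) that maximizes
    total score, i.e. the optimum of the 0-1 assignment objective."""
    n = len(score)
    best, best_val = None, float("-inf")
    for perm in permutations(range(n)):
        val = sum(score[i][perm[i]] for i in range(n))
        if val > best_val:
            best, best_val = perm, val
    return list(best), best_val
```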
ProDaMa: an open source Python library to generate protein structure datasets.
Armano, Giuliano; Manconi, Andrea
2009-10-02
The huge difference between the number of known sequences and known tertiary structures has justified the use of automated methods for protein analysis. Although a general methodology to solve these problems has not yet been devised, researchers are engaged in developing more accurate techniques and algorithms whose training plays a relevant role in determining their performance. From this perspective, particular importance is given to the training data used in experiments, and researchers are often engaged in the generation of specialized datasets that meet their requirements. To facilitate the task of generating specialized datasets we devised and implemented ProDaMa, an open source Python library that provides classes for retrieving, organizing, updating, analyzing, and filtering protein data. ProDaMa has been used to generate specialized datasets useful for secondary structure prediction and to develop a collaborative web application aimed at generating and sharing protein structure datasets. The library, the related database, and the documentation are freely available at the URL http://iasc.diee.unica.it/prodama.
BEACON: automated tool for Bacterial GEnome Annotation ComparisON.
Kalkatawi, Manal; Alam, Intikhab; Bajic, Vladimir B
2015-08-18
Genome annotation is one way of summarizing the existing knowledge about genomic characteristics of an organism. There has been an increased interest during the last several decades in computer-based structural and functional genome annotation. Many methods for this purpose have been developed for eukaryotes and prokaryotes. Our study focuses on comparison of functional annotations of prokaryotic genomes. To the best of our knowledge there is no fully automated system for detailed comparison of functional genome annotations generated by different annotation methods (AMs). The presence of many AMs and development of new ones introduce the need to (a) compare different annotations of a single genome, and (b) generate an annotation by combining individual ones. To address these issues we developed an Automated Tool for Bacterial GEnome Annotation ComparisON (BEACON) that benefits both AM developers and annotation analysers. BEACON provides detailed comparison of gene function annotations of prokaryotic genomes obtained by different AMs and generates extended annotations through combination of individual ones. To illustrate BEACON's utility, we provide a comparison analysis of multiple different annotations generated for four genomes and show on these examples that the extended annotation can increase the number of genes annotated by putative functions up to 27%, while the number of genes without any function assignment is reduced. We developed BEACON, a fast tool for an automated and a systematic comparison of different annotations of single genomes. The extended annotation assigns putative functions to many genes with unknown functions. BEACON is available under GNU General Public License version 3.0 and is accessible at http://www.cbrc.kaust.edu.sa/BEACON/.
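The combination step can be illustrated with a minimal sketch. Gene identifiers and function labels below are invented, and a real tool such as BEACON must also reconcile conflicting assignments between methods; here we simply take the first putative function any method provides.

```python
def extend_annotations(annotations):
    """Given {method_name: {gene_id: function or None}}, build an
    'extended' annotation assigning each gene the first putative
    function found across methods.  A simplified sketch of combining
    per-method functional annotations into one."""
    genes = set()
    for ann in annotations.values():
        genes.update(ann)
    extended = {}
    for g in sorted(genes):
        funcs = [ann[g] for ann in annotations.values() if ann.get(g)]
        extended[g] = funcs[0] if funcs else None
    return extended

# Two hypothetical annotation methods for three genes.
am1 = {"g1": "kinase", "g2": None, "g3": None}
am2 = {"g1": "kinase", "g2": "transporter", "g3": None}
extended = extend_annotations({"AM1": am1, "AM2": am2})
```

In this toy case the extended annotation reduces the genes without any function assignment from two (under AM1) to one.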
Senger, Stefan; Bartek, Luca; Papadatos, George; Gaulton, Anna
2015-12-01
First public disclosure of new chemical entities often takes place in patents, which makes them an important source of information. However, with an ever increasing number of patent applications, manual processing and curation on such a large scale becomes even more challenging. An alternative approach better suited for this large corpus of documents is the automated extraction of chemical structures. A number of patent chemistry databases generated by using the latter approach are now available but little is known that can help to manage expectations when using them. This study aims to address this by comparing two such freely available sources, SureChEMBL and IBM SIIP (IBM Strategic Intellectual Property Insight Platform), with manually curated commercial databases. When looking at the percentage of chemical structures successfully extracted from a set of patents, using SciFinder as our reference, 59 and 51 % were also found in our comparison in SureChEMBL and IBM SIIP, respectively. When performing this comparison with compounds as starting point, i.e. establishing if for a list of compounds the databases provide the links between chemical structures and patents they appear in, we obtained similar results. SureChEMBL and IBM SIIP found 62 and 59 %, respectively, of the compound-patent pairs obtained from Reaxys. In our comparison of automatically generated vs. manually curated patent chemistry databases, the former successfully provided approximately 60 % of links between chemical structure and patents. It needs to be stressed that only a very limited number of patents and compound-patent pairs were used for our comparison. Nevertheless, our results will hopefully help to manage expectations of users of patent chemistry databases of this type and provide a useful framework for more studies like ours as well as guide future developments of the workflows used for the automated extraction of chemical structures from patents. 
The challenges we have encountered whilst performing this study highlight that more needs to be done to make such assessments easier. Above all, more adequate, preferably open access to relevant 'gold standards' is required.
Automated spectral classification and the GAIA project
NASA Technical Reports Server (NTRS)
Lasala, Jerry; Kurtz, Michael J.
1995-01-01
Two dimensional spectral types for each of the stars observed in the global astrometric interferometer for astrophysics (GAIA) mission would provide additional information for the galactic structure and stellar evolution studies, as well as helping in the identification of unusual objects and populations. The classification of the large quantity of generated spectra requires that automated techniques be implemented. Approaches for the automatic classification are reviewed, and a metric-distance method is discussed. In tests, the metric-distance method produced spectral types with mean errors comparable to those of human classifiers working at similar resolution. Data and equipment requirements for an automated classification survey are discussed. A program of auxiliary observations is proposed to yield spectral types and radial velocities for the GAIA-observed stars.
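The metric-distance idea reduces to nearest-template matching: classify an observed spectrum by the template spectrum closest to it under some distance. The Euclidean metric and the tiny template library below are assumptions for illustration, not the paper's actual metric or calibration set.

```python
def classify_spectrum(spectrum, templates):
    """Metric-distance classification: return the spectral type whose
    template is nearest (Euclidean distance) to the observed spectrum.
    Spectra are represented as lists of (hypothetical) feature values."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda t: dist(spectrum, templates[t]))

# Invented three-feature templates for three spectral types.
templates = {"A0": [1.0, 0.2, 0.1],
             "G2": [0.5, 0.6, 0.4],
             "M5": [0.1, 0.3, 0.9]}
observed = [0.45, 0.55, 0.5]
spectral_type = classify_spectrum(observed, templates)
```

With realistic spectra the feature vectors would be many hundreds of flux bins, but the classification step is the same minimization.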
Iterative Repair Planning for Spacecraft Operations Using the Aspen System
NASA Technical Reports Server (NTRS)
Rabideau, G.; Knight, R.; Chien, S.; Fukunaga, A.; Govindjee, A.
2000-01-01
This paper describes the Automated Scheduling and Planning Environment (ASPEN). ASPEN encodes complex spacecraft knowledge of operability constraints, flight rules, spacecraft hardware, science experiments and operations procedures to allow for automated generation of low level spacecraft sequences. Using a technique called iterative repair, ASPEN classifies constraint violations (i.e., conflicts) and attempts to repair each by performing a planning or scheduling operation. It must reason about which conflict to resolve first and what repair method to try for the given conflict. ASPEN is currently being utilized in the development of automated planner/scheduler systems for several spacecraft, including the UFO-1 naval communications satellite and the Citizen Explorer (CX1) satellite, as well as for planetary rover operations and antenna ground systems automation. This paper focuses on the algorithm and search strategies employed by ASPEN to resolve spacecraft operations constraints, as well as the data structures for representing these constraints.
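The iterative-repair strategy can be sketched as a small loop: detect conflicts, pick one, apply a repair operation, and re-check, rather than searching the full schedule space. The conflict and repair functions below are placeholder stand-ins, not ASPEN's actual operations or heuristics.

```python
def iterative_repair(schedule, conflict_fns, repair_fns, max_iters=100):
    """Minimal iterative-repair loop: classify conflicts, resolve the
    first one found with its registered repair method, and repeat until
    conflict-free or out of iterations."""
    for _ in range(max_iters):
        conflicts = [c for fn in conflict_fns for c in fn(schedule)]
        if not conflicts:
            return schedule, True
        conflict = conflicts[0]              # heuristic: first conflict first
        schedule = repair_fns[conflict[0]](schedule, conflict)
    return schedule, False

# Toy domain: activities with start times; conflict if two overlap
# (each activity has unit duration).
def overlaps(sched):
    starts = sorted(sched.items(), key=lambda kv: kv[1])
    return [("overlap", a, b) for (a, ta), (b, tb) in zip(starts, starts[1:])
            if tb < ta + 1]

def shift_later(sched, conflict):
    _, a, b = conflict
    sched = dict(sched)
    sched[b] = sched[a] + 1                  # move the later activity after a
    return sched

sched, ok = iterative_repair({"obs1": 0, "obs2": 0, "downlink": 2},
                             [overlaps], {"overlap": shift_later})
```

ASPEN's real repair step additionally reasons about which conflict to resolve first and which of several repair methods to try, which this sketch collapses into fixed choices.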
Protein Structure and Function Prediction Using I-TASSER
Yang, Jianyi; Zhang, Yang
2016-01-01
I-TASSER is a hierarchical protocol for automated protein structure prediction and structure-based function annotation. Starting from the amino acid sequence of target proteins, I-TASSER first generates full-length atomic structural models from multiple threading alignments and iterative structural assembly simulations followed by atomic-level structure refinement. The biological functions of the protein, including ligand-binding sites, enzyme commission number, and gene ontology terms, are then inferred from known protein function databases based on sequence and structure profile comparisons. I-TASSER is freely available as both an on-line server and a stand-alone package. This unit describes how to use the I-TASSER protocol to generate structure and function prediction and how to interpret the prediction results, as well as alternative approaches for further improving the I-TASSER modeling quality for distant-homologous and multi-domain protein targets. PMID:26678386
Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms
NASA Astrophysics Data System (ADS)
Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart
2008-03-01
Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Reconstructions of similar or better accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
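One common way to remove the human from the regularization loop is to pick the Tikhonov parameter automatically by generalized cross-validation (GCV). The abstract does not specify which automated criterion was used, so GCV here is an illustrative choice, and the small consistent system in the example is far better conditioned than a real PPTR inversion.

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Solve min ||A x - b||^2 + lam ||x||^2 over a grid of lam values,
    selecting lam automatically by minimizing the GCV score
    ||A x - b||^2 / (n - trace(H))^2, where H is the influence matrix."""
    n = len(b)
    best = (None, np.inf, None)
    for lam in lambdas:
        M = A.T @ A + lam * np.eye(A.shape[1])   # regularized normal eqns
        x = np.linalg.solve(M, A.T @ b)
        H = A @ np.linalg.solve(M, A.T)          # influence matrix
        resid = np.linalg.norm(A @ x - b) ** 2
        gcv = resid / (n - np.trace(H)) ** 2
        if gcv < best[1]:
            best = (lam, gcv, x)
    return best[0], best[2]

# Small well-posed example (exactly consistent, so GCV favors the
# smallest regularization).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
lam, x = tikhonov_gcv(A, b, [1e-8, 1e-4, 1.0])
```

For an ill-posed depth-profiling kernel the GCV minimum falls at a genuinely nonzero lam, which is where the automated choice replaces trained-user judgment.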
Lee, Woonghee; Kim, Jin Hae; Westler, William M; Markley, John L
2011-06-15
PONDEROSA (Peak-picking Of Noe Data Enabled by Restriction of Shift Assignments) accepts input information consisting of a protein sequence, backbone and sidechain NMR resonance assignments, and 3D-NOESY ((13)C-edited and/or (15)N-edited) spectra, and returns assignments of NOESY crosspeaks, distance and angle constraints, and a reliable NMR structure represented by a family of conformers. PONDEROSA incorporates and integrates external software packages (TALOS+, STRIDE and CYANA) to carry out different steps in the structure determination. PONDEROSA implements internal functions that identify and validate NOESY peak assignments and assess the quality of the calculated three-dimensional structure of the protein. The robustness of the analysis results from PONDEROSA's hierarchical processing steps that involve iterative interaction among the internal and external modules. PONDEROSA supports a variety of input formats: SPARKY assignment table (.shifts) and spectrum file formats (.ucsf), XEASY proton file format (.prot), and NMR-STAR format (.star). To demonstrate the utility of PONDEROSA, we used the package to determine 3D structures of two proteins: human ubiquitin and Escherichia coli iron-sulfur scaffold protein variant IscU(D39A). The automatically generated structural constraints and ensembles of conformers were as good as or better than those determined previously by much less automated means. The program, in the form of binary code along with tutorials and reference manuals, is available at http://ponderosa.nmrfam.wisc.edu/.
Tchoua, Roselyne B; Qin, Jian; Audus, Debra J; Chard, Kyle; Foster, Ian T; de Pablo, Juan
2016-09-13
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The vast majority of these databases have been generated manually, through decades of labor-intensive harvesting of information from the literature; yet, while there are many examples of commonly used databases, a significant number of important properties remain locked within the tables, figures, and text of publications. The question addressed in our work is whether, and to what extent, the process of data collection can be automated. Students of the physical sciences and engineering are often confronted with the challenge of finding and applying property data from the literature, and a central aspect of their education is to develop the critical skills needed to identify such data and discern their meaning or validity. To address shortcomings associated with automated information extraction, while simultaneously preparing the next generation of scientists for their future endeavors, we developed a novel course-based approach in which students develop skills in polymer chemistry and physics and apply their knowledge by assisting with the semi-automated creation of a thermodynamic property database.
A computational framework for automation of point defect calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
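The quantity the corrections feed into is the defect formation energy in the standard supercell formalism, E_f = E_def - E_host - sum_i n_i mu_i + q(E_VBM + E_F) + E_corr. The sketch below encodes that bookkeeping; all numerical values are illustrative placeholders, not DFT results, and the framework's actual API is not reproduced here.

```python
def defect_formation_energy(E_def, E_host, removed, added, mu, q,
                            E_fermi, E_vbm, corrections):
    """Defect formation energy in the supercell formalism.  Atoms
    removed from the cell are returned to a chemical reservoir at
    potential mu, added atoms are drawn from it; q(E_VBM + E_F)
    accounts for electron exchange; 'corrections' collects finite-size
    terms (potential alignment, image charge, band filling)."""
    dE = E_def - E_host
    for elem, n in removed.items():
        dE += n * mu[elem]            # removed atoms go to the reservoir
    for elem, n in added.items():
        dE -= n * mu[elem]            # added atoms come from the reservoir
    dE += q * (E_vbm + E_fermi)       # E_fermi referenced to the VBM
    dE += sum(corrections.values())
    return dE

# Hypothetical +2 oxygen vacancy with invented energies (eV).
Ef = defect_formation_energy(E_def=-100.0, E_host=-105.0,
                             removed={"O": 1}, added={}, mu={"O": -4.0},
                             q=2, E_fermi=0.5, E_vbm=-1.0,
                             corrections={"image_charge": 0.3,
                                          "pot_align": -0.1})
```

A framework like the one described automates producing each of these inputs (defect supercells, reference energies, and the correction terms) from the DFT calculations.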
NASA Astrophysics Data System (ADS)
Zhou, Yuhong; Klages, Peter; Tan, Jun; Chi, Yujie; Stojadinovic, Strahinja; Yang, Ming; Hrycushko, Brian; Medin, Paul; Pompos, Arnold; Jiang, Steve; Albuquerque, Kevin; Jia, Xun
2017-06-01
High dose rate (HDR) brachytherapy treatment planning is conventionally performed manually and/or with aids of preplanned templates. An automated process would elevate the standard of care by improving treatment planning efficiency, eliminating human error, and reducing variations in plan quality. Thus, our group is developing AutoBrachy, an automated HDR brachytherapy planning suite of modules used to augment a clinical treatment planning system. This paper describes our proof-of-concept module for vaginal cylinder HDR planning that has been fully developed. After a patient CT scan is acquired, the cylinder applicator is automatically segmented using image-processing techniques. The target CTV is generated based on physician-specified treatment depth and length. Locations of the dose calculation point, apex point and vaginal surface point, as well as the central applicator channel coordinates, and the corresponding dwell positions are determined according to their geometric relationship with the applicator and written to a structure file. Dwell times are computed through iterative quadratic optimization techniques. The planning information is then transferred to the treatment planning system through a DICOM-RT interface. The entire process was tested for nine patients. The AutoBrachy cylindrical applicator module was able to generate treatment plans for these cases with clinical grade quality. Computation times varied between 1 and 3 min on an Intel Xeon CPU E3-1226 v3 processor. All geometric components in the automated treatment plans were generated accurately. The applicator channel tip positions agreed with the manually identified positions with submillimeter deviations, and the channel orientations between the plans agreed to within 1 degree. The automatically generated plans obtained clinically acceptable quality.
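The dwell-time step is a constrained quadratic problem: find nonnegative dwell times t minimizing ||D t - d||^2, where D maps unit dwell times to dose at the calculation points. Projected gradient descent below stands in for the paper's iterative quadratic optimization, and the dose matrix is invented for illustration.

```python
import numpy as np

def solve_dwell_times(D, d_target, iters=5000, lr=None):
    """Nonnegative least squares by projected gradient descent:
    minimize ||D t - d_target||^2 subject to t >= 0.
    D[i, j] is the dose at calculation point i per unit dwell time at
    dwell position j."""
    n = D.shape[1]
    t = np.zeros(n)
    if lr is None:
        lr = 1.0 / np.linalg.norm(D.T @ D, 2)   # 1 / Lipschitz constant
    for _ in range(iters):
        grad = D.T @ (D @ t - d_target)
        t = np.maximum(t - lr * grad, 0.0)       # project onto t >= 0
    return t

# Two dwell positions, two dose points, invented dose-rate matrix.
D = np.array([[1.0, 0.2],
              [0.2, 1.0]])
d = np.array([1.0, 1.0])
t = solve_dwell_times(D, d)
```

In a clinical solver the constraint set is richer (maximum dwell times, organ-at-risk limits), but the quadratic structure is the same.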
A Computational Framework for Automation of Point Defect Calculations
NASA Astrophysics Data System (ADS)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanovic, Vladan; National Renewable Energy Laboratory, Golden, Colorado 80401 Collaboration
A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology. We believe that a robust automated tool like this will enable the materials by design community to assess the impact of point defects on materials performance. National Renewable Energy Laboratory, Golden, Colorado 80401.
Automated Run-Time Mission and Dialog Generation
2007-03-01
Subject terms: Processing, Social Network Analysis, Simulation, Automated Scenario Generation.
Generative Representations for Automated Design of Robots
NASA Technical Reports Server (NTRS)
Homby, Gregory S.; Lipson, Hod; Pollack, Jordan B.
2007-01-01
A method of automated design of complex, modular robots involves an evolutionary process in which generative representations of designs are used. The term generative representations as used here signifies, loosely, representations that consist of or include algorithms, computer programs, and the like, wherein encoded designs can reuse elements of their encoding and thereby evolve toward greater complexity. Automated design of robots through synthetic evolutionary processes has already been demonstrated, but it is not clear whether genetically inspired search algorithms can yield designs that are sufficiently complex for practical engineering. The ultimate success of such algorithms as tools for automation of design depends on the scaling properties of representations of designs. A nongenerative representation (one in which each element of the encoded design is used at most once in translating to the design) scales linearly with the number of elements. Search algorithms that use nongenerative representations quickly become intractable (search times vary approximately exponentially with numbers of design elements), and thus are not amenable to scaling to complex designs. Generative representations are compact representations and were devised as means to circumvent the above-mentioned fundamental restriction on scalability. In the present method, a robot is defined by a compact programmatic form (its generative representation) and the evolutionary variation takes place on this form. The evolutionary process is an iterative one, wherein each cycle consists of the following steps: 1. Generative representations are generated in an evolutionary subprocess. 2. Each generative representation is a program that, when compiled, produces an assembly procedure. 3. In a computational simulation, a constructor executes an assembly procedure to generate a robot. 4. A physical-simulation program tests the performance of a simulated constructed robot, evaluating the performance according to a fitness criterion to yield a figure of merit that is fed back into the evolutionary subprocess of the next iteration. In comparison with prior approaches to automated evolutionary design of robots, the use of generative representations offers two advantages: First, a generative representation enables the reuse of components in regular and hierarchical ways and thereby serves as a systematic means of creating more complex modules out of simpler ones. Second, the evolved generative representation may capture intrinsic properties of the design problem, so that variations in the representations move through the design space more effectively than do equivalent variations in a nongenerative representation. This method has been demonstrated by using it to design some robots that move, variously, by walking, rolling, or sliding. Some of the robots were built (see figure). Although these robots are very simple, in comparison with robots designed by humans, their structures are more regular, modular, hierarchical, and complex than are those of evolved designs of comparable functionality synthesized by use of nongenerative representations.
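The reuse property that distinguishes generative from nongenerative representations can be shown with a toy rewrite-rule (L-system-style) encoding: a short program whose symbols are expanded repeatedly, so a compact encoding yields a larger, regular structure. The symbols and rule below are invented stand-ins for the paper's robot-encoding language.

```python
def expand(axiom, rules, depth):
    """Expand a generative representation: repeatedly rewrite each
    symbol by its rule (symbols without a rule are copied through).
    Because rules are reused at every step, encoding size stays fixed
    while the expressed structure grows."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# 'B' = body segment, 'S' = strut, '[L]' = attached leg module.
# One small rule produces a regular, repeating body plan.
body_plan = expand("B", {"B": "S[L]B"}, 3)
```

A nongenerative encoding of the same structure would have to spell out every strut and leg explicitly, which is the linear-scaling limitation discussed above.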
Automated lattice data generation
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.
2018-03-01
The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
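A workflow manager of this kind reduces, at its core, to running tasks in dependency order. The sketch below is a minimal stand-in, not Taxi's API: task bodies are plain callables, whereas a real manager would also persist state, submit batch jobs, and handle failures and restarts.

```python
def run_workflow(tasks, deps):
    """Run tasks respecting dependencies: a task runs only once all of
    its dependencies have completed.  tasks maps name -> callable;
    deps maps name -> list of prerequisite task names."""
    done, order = set(), []
    pending = dict(tasks)
    while pending:
        ready = [name for name in pending
                 if all(d in done for d in deps.get(name, []))]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable dependencies")
        for name in sorted(ready):
            pending.pop(name)()            # execute the task body
            done.add(name)
            order.append(name)
    return order

# Toy lattice pipeline: generate a configuration, measure, analyze.
log = []
tasks = {"generate_config": lambda: log.append("gen"),
         "measure_plaquette": lambda: log.append("meas"),
         "analyze": lambda: log.append("ana")}
deps = {"measure_plaquette": ["generate_config"],
        "analyze": ["measure_plaquette"]}
order = run_workflow(tasks, deps)
```

In ensemble generation the dependency chain is typically much longer (each configuration depends on the previous one in the Markov chain), which is exactly the bookkeeping that is error-prone by hand.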
Designing Anticancer Peptides by Constructive Machine Learning.
Grisoni, Francesca; Neuhaus, Claudia S; Gabernet, Gisela; Müller, Alex T; Hiss, Jan A; Schneider, Gisbert
2018-04-21
Constructive (generative) machine learning enables the automated generation of novel chemical structures without the need for explicit molecular design rules. This study presents the experimental application of such a deep machine learning model to design membranolytic anticancer peptides (ACPs) de novo. A recurrent neural network with long short-term memory cells was trained on α-helical cationic amphipathic peptide sequences and then fine-tuned with 26 known ACPs by transfer learning. This optimized model was used to generate unique and novel amino acid sequences. Twelve of the peptides were synthesized and tested for their activity on MCF7 human breast adenocarcinoma cells and selectivity against human erythrocytes. Ten of these peptides were active against cancer cells. Six of the active peptides killed MCF7 cancer cells without affecting human erythrocytes with at least threefold selectivity. These results advocate constructive machine learning for the automated design of peptides with desired biological activities. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
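The generative idea, learning a sequence model from known actives and sampling novel sequences from it, can be illustrated with a first-order Markov chain. This is a deliberately simple stand-in for the paper's LSTM: it captures only local residue composition, not the long-range helical patterning a recurrent network can learn, and the training peptides below are invented examples.

```python
import random

def train_bigram(seqs):
    """Count residue-to-residue transitions in training peptides."""
    model = {}
    for s in seqs:
        for a, b in zip(s, s[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, rng):
    """Sample a novel sequence from the transition counts: each new
    residue is drawn in proportion to how often it followed the
    previous residue in the training set."""
    seq = [start]
    for _ in range(length - 1):
        seq.append(rng.choice(model[seq[-1]]))
    return "".join(seq)

# Invented cationic/amphipathic-style training peptides.
model = train_bigram(["KLAKLAK", "KLWKKLL"])
rng = random.Random(0)
peptide = generate(model, "K", 8, rng)
```

Fine-tuning by transfer learning, as in the paper, corresponds here to re-counting transitions on the small set of known ACPs after training on the larger corpus.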
WhoKnows? Evaluating Linked Data Heuristics with a Quiz that Cleans up DBpedia
ERIC Educational Resources Information Center
Waitelonis, Jorg; Ludwig, Nadine; Knuth, Magnus; Sack, Harald
2011-01-01
Purpose: Linking Open Data (LOD) provides a vast amount of well structured semantic information, but many inconsistencies may occur, especially if the data are generated with the help of automated methods. Data cleansing approaches enable detection of inconsistencies and overhauling of affected data sets, but they are difficult to apply…
Automated Sequence Generation Process and Software
NASA Technical Reports Server (NTRS)
Gladden, Roy
2007-01-01
"Automated sequence generation" (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences.
Development Status: Automation Advanced Development Space Station Freedom Electric Power System
NASA Technical Reports Server (NTRS)
Dolce, James L.; Kish, James A.; Mellor, Pamela A.
1990-01-01
Electric power system automation for Space Station Freedom is intended to operate in a loop. Data from the power system is used for diagnosis and security analysis to generate Operations Management System (OMS) requests, which are sent to an arbiter, which sends a plan to a command generator connected to the electric power system. This viewgraph presentation profiles automation software for diagnosis, scheduling, and constraint interfaces, and simulation to support automation development. The automation development process is diagrammed, and the process of creating Ada and ART versions of the automation software is described.
Automated knowledge generation
NASA Technical Reports Server (NTRS)
Myler, Harley R.; Gonzalez, Avelino J.
1988-01-01
The general objective of the NASA/UCF Automated Knowledge Generation Project was the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is in the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on the object-oriented language Flavors.
NASA Astrophysics Data System (ADS)
Illing, Gerd; Saenger, Wolfram; Heinemann, Udo
2000-06-01
The Protein Structure Factory will be established to characterize proteins encoded by human genes or cDNAs, which will be selected by criteria of potential structural novelty or medical or biotechnological usefulness. It represents an integrative approach to structure analysis combining bioinformatics techniques, automated gene expression and purification of gene products, generation of a biophysical fingerprint of the proteins and the determination of their three-dimensional structures either by NMR spectroscopy or by X-ray diffraction. The use of synchrotron radiation will be crucial to the Protein Structure Factory: high brilliance and tunable wavelengths are prerequisites for fast data collection, the use of small crystals and multiwavelength anomalous diffraction (MAD) phasing. With the opening of BESSY II, direct access to a third-generation XUV storage ring source with excellent conditions is available nearby. An insertion device with two MAD beamlines and one constant energy station will be set up by 2001.
Classification of Automated Search Traffic
NASA Astrophysics Data System (ADS)
Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.
As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site’s rank, augmenting online games, or possibly to maliciously alter click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information, and those generated by automated processes. We categorize these features into two classes, either an interpretation of the physical model of human interactions, or as behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
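Two of the simplest physical-model features, query rate and query-diversity entropy, can be computed per session as below. These two features and the rate cutoff are illustrative assumptions; the paper uses many more signals (click-through behavior, typing-speed limits, and others) and trained classifiers rather than a fixed threshold.

```python
import math
from collections import Counter

def session_features(queries, timestamps):
    """Per-session features separating human from automated traffic:
    query rate (queries per second) and Shannon entropy of the query
    distribution (low entropy = highly repetitive sessions)."""
    duration = max(timestamps) - min(timestamps) or 1.0
    rate = len(queries) / duration
    counts = Counter(queries)
    n = len(queries)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return rate, entropy

def is_automated(queries, timestamps, rate_cutoff=1.0):
    """Toy binary classifier: flag sessions issuing queries faster than
    a human plausibly could.  The cutoff is an invented value."""
    rate, _ = session_features(queries, timestamps)
    return rate > rate_cutoff

# A repetitive burst of 20 queries in ~2 s vs. two queries in 30 s.
bot = is_automated(["site:a.com"] * 20, [i * 0.1 for i in range(20)])
human = is_automated(["weather", "news"], [0.0, 30.0])
```

In the multiclass setting described above, such features feed a classifier that also distinguishes subclasses of automated traffic (rank monitors, game bots, click fraud).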
Creation of structured documentation templates using Natural Language Processing techniques.
Kashyap, Vipul; Turchin, Alexander; Morin, Laura; Chang, Frank; Li, Qi; Hongsermeier, Tonya
2006-01-01
Structured Clinical Documentation is a fundamental component of the healthcare enterprise, linking both clinical (e.g., electronic health record, clinical decision support) and administrative functions (e.g., evaluation and management coding, billing). One of the challenges in creating good quality documentation templates has been the inability to address specialized clinical disciplines and adapt to local clinical practices. A one-size-fits-all approach leads to poor adoption and inefficiencies in the documentation process. On the other hand, the cost associated with manual generation of documentation templates is significant. Consequently there is a need for at least partial automation of the template generation process. We propose an approach and methodology for the creation of structured documentation templates for diabetes using Natural Language Processing (NLP).
Bayesian automated cortical segmentation for neonatal MRI
NASA Astrophysics Data System (ADS)
Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha
2017-11-01
Several attempts have been made in the past few years to develop and implement an automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 fullterm and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structure and cleaning the edges of the cortical grey matter. This method shows a promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, and further improving the reliability and processing time of neonatal MR images. Further improvements will include a larger dataset of training images acquired from different manufacturers.
Automated Test Case Generation for an Autopilot Requirement Prototype
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael
2011-01-01
Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
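The idea of deriving both inputs and oracles from the design can be sketched in miniature: each branch guard of a mode-logic specification plays the role of a path condition, and a brute-force search over a small input domain stands in for the constraint solver of real symbolic execution. The mode names and guards below are invented for the sketch; they are not ADEPT's model.

```python
# Each branch pairs a target mode (the oracle) with a guard (the path
# condition). Enumerating inputs until the guard holds yields one
# concrete test per reachable branch.
BRANCHES = [
    ("engage_hold", lambda alt, armed: armed and alt >= 100),
    ("stay_manual", lambda alt, armed: not armed),
    ("inhibit_low", lambda alt, armed: armed and alt < 100),
]

def generate_tests(branches, alts=range(0, 201, 50), armed_vals=(False, True)):
    """Return one ((altitude, armed), expected_mode) pair per reachable branch."""
    tests = []
    for mode, guard in branches:
        for alt in alts:
            hit = next((armed for armed in armed_vals if guard(alt, armed)), None)
            if hit is not None:
                tests.append(((alt, hit), mode))  # concrete input + oracle
                break
    return tests
```

In a real symbolic-execution engine the guard would be solved symbolically rather than enumerated, but the input/oracle pairing is the same.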
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Joshua M.
Manufacturing tasks that are deemed too hazardous for workers require the use of automation, robotics, and/or other remote handling tools. The associated hazards may be radiological or nonradiological, and based on the characteristics of the environment and processing, a design may necessitate robotic labor, human labor, or both. There are also other factors, such as cost, ergonomics, maintenance, and efficiency, that affect task allocation and other design choices. Handling the tradeoffs of these factors can be complex, and lack of experience can be an issue when trying to determine whether and what feasible automation/robotics options exist. To address this problem, we utilize common engineering design approaches adapted for manufacturing system design in hazardous environments. We limit our scope to the conceptual and embodiment design stages, specifically a computational algorithm for concept generation and early design evaluation. For concept generation, we first develop the functional model or function structure for the process, using the common 'verb-noun' format for describing function. A common language or functional basis for manufacturing was developed and utilized to formalize function descriptions and guide rules for function decomposition. Potential components for embodiment are also grouped in terms of this functional language and are stored in a database. The properties of each component are given as quantitative and qualitative criteria. Operators are also rated for task-relevant criteria which are used to address task compatibility. Through the gathering of process requirements/constraints, construction of the component database, and development of the manufacturing basis and rule set, design knowledge is stored and available for computer use.
Thus, once the higher-level process functions are defined, the computer can automate the synthesis of new design concepts through alternating steps of embodiment and function structure updates/decomposition. In the process, criteria guide function allocation of components/operators and help ensure compatibility and feasibility. Through multiple function assignment options and varied function structures, multiple design concepts are created. All of the generated designs are then evaluated based on a number of relevant evaluation criteria: cost, dose, ergonomics, hazards, efficiency, etc. These criteria are computed using physical properties/parameters of each system based on the qualities an engineer would use to make evaluations. Nuclear processes such as oxide conversion and electrorefining are utilized to aid algorithm development and provide test cases for the completed program. Through our approach, we capture design knowledge related to manufacturing and other operations in hazardous environments to enable a computational program to automatically generate and evaluate system design concepts.
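The criteria-guided function allocation can be sketched as a tiny greedy assignment: each process function is matched against candidate components/operators from a database and the lowest weighted-criteria candidate wins. The function names, candidates, criteria, and weights below are invented placeholders, not the report's data, and the real algorithm also varies the function structure itself.

```python
# Component database keyed by process function; each candidate carries
# quantitative criteria (lower is better for all criteria here).
COMPONENTS = {
    "transfer material": [("robot arm",  {"cost": 3, "dose": 1}),
                          ("operator",   {"cost": 1, "dose": 5})],
    "inspect part":      [("vision rig", {"cost": 2, "dose": 0}),
                          ("operator",   {"cost": 1, "dose": 4})],
}

def allocate(functions, weights):
    """Greedy allocation: per function, pick the candidate with the
    lowest weighted criteria score."""
    design = {}
    for fn in functions:
        scored = [(sum(weights[c] * v for c, v in crit.items()), name)
                  for name, crit in COMPONENTS[fn]]
        design[fn] = min(scored)[1]
    return design
```

Varying the weights (e.g. penalizing dose heavily for radiological processes) produces the alternative concepts that the evaluation stage then compares.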
Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D
NASA Technical Reports Server (NTRS)
Carle, Alan; Fagan, Mike; Green, Lawrence L.
1998-01-01
This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
Topology-aware illumination design for volume rendering.
Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan
2016-08-19
Direct volume rendering is a flexible and effective approach to inspecting large volumetric data such as medical and biological images. In conventional volume rendering, it is often time-consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the variables of an illumination model to different structures manually, and thus neglect the important illumination variations due to structure differences. We introduce a novel illumination design paradigm for volume rendering that uses topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of the input volumetric data. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize illuminance perception differences between structures, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that our approach is more effective in depth and shape depiction, as well as providing higher perceptual differences between structures.
PICKY: a novel SVD-based NMR spectra peak picking method.
Alipanahi, Babak; Gao, Xin; Karakoc, Emre; Donaldson, Logan; Li, Ming
2009-06-15
Picking peaks from experimental NMR spectra is a key unsolved problem for automated NMR protein structure determination. Such a process is a prerequisite for resonance assignment, nuclear Overhauser enhancement (NOE) distance restraint assignment, and structure calculation tasks. Manual or semi-automatic peak picking, currently the predominant approach in NMR labs, is tedious, time consuming and costly. We introduce new ideas, including noise-level estimation, component forming and sub-division, singular value decomposition (SVD)-based peak picking, and peak pruning and refinement. PICKY is developed as an automated peak picking method. Unlike previous research on peak picking, we provide a systematic study of the proposed method. PICKY is tested on 32 real 2D and 3D spectra of eight target proteins, and achieves an average of 88% recall and 74% precision. PICKY is efficient, taking on average 15.7 s to process an NMR spectrum. More importantly, PICKY actually works in practice. We feed peak lists generated by PICKY to IPASS for resonance assignment, feed IPASS assignments to SPARTA for fragment generation, and feed SPARTA fragments to FALCON for structure calculation. This results in high-resolution structures of several proteins, for example, TM1112 at 1.25 Å. PICKY is available upon request. The peak lists produced by PICKY can be easily loaded by SPARKY to enable a better interactive strategy for rapid peak picking.
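The SVD-based step can be illustrated on a toy 2D "spectrum": a low-rank reconstruction keeps the top singular components, which concentrate coherent signal, and a local-maximum scan above a threshold then picks peaks. This is a minimal sketch of the general technique, not PICKY's implementation; the real method also estimates the noise level to choose the rank and applies pruning and refinement.

```python
import numpy as np

def svd_denoise(spectrum, rank):
    """Low-rank reconstruction of a 2D spectrum: keep the top `rank`
    singular components and discard the noise-dominated tail."""
    u, s, vt = np.linalg.svd(spectrum, full_matrices=False)
    s[rank:] = 0.0
    return u @ np.diag(s) @ vt

def pick_peaks(spectrum, threshold):
    """Return (row, col) of interior local maxima above `threshold`."""
    peaks = []
    for i in range(1, spectrum.shape[0] - 1):
        for j in range(1, spectrum.shape[1] - 1):
            v = spectrum[i, j]
            if v > threshold and v == spectrum[i-1:i+2, j-1:j+2].max():
                peaks.append((i, j))
    return peaks
```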
Kotai Antibody Builder: automated high-resolution structural modeling of antibodies.
Yamashita, Kazuo; Ikeda, Kazuyoshi; Amada, Karlou; Liang, Shide; Tsuchiya, Yuko; Nakamura, Haruki; Shirai, Hiroki; Standley, Daron M
2014-11-15
Kotai Antibody Builder is a Web service for tertiary structural modeling of antibody variable regions. It consists of three main steps: hybrid template selection by sequence alignment and canonical rules, 3D rendering of alignments, and CDR-H3 loop modeling. For the last step, in addition to the rule-based heuristics used to build the initial model, a refinement option is available that uses fragment assembly followed by knowledge-based scoring. Using targets from the Second Antibody Modeling Assessment, we demonstrate that Kotai Antibody Builder generates models with an overall accuracy equal to that of the best-performing semi-automated predictors using expert knowledge. Kotai Antibody Builder is available at http://kotaiab.org (contact: standley@ifrec.osaka-u.ac.jp).
AUTOMATED LITERATURE PROCESSING HANDLING AND ANALYSIS SYSTEM--FIRST GENERATION.
ERIC Educational Resources Information Center
Redstone Scientific Information Center, Redstone Arsenal, AL.
The report presents a summary of the development and the characteristics of the first generation of the Automated Literature Processing, Handling and Analysis (ALPHA-1) system. Descriptions of the computer technology of ALPHA-1 and the use of this automated library technique are presented. Each of the subsystems and modules now in operation are…
Automation of checkout for the shuttle operations era
NASA Technical Reports Server (NTRS)
Anderson, J. A.; Hendrickson, K. O.
1985-01-01
The Space Shuttle checkout differs from that of its Apollo predecessor. The complexity of the hardware, the shortened turnaround time, and the software that performs ground checkout are outlined. The generation of new techniques and standards for software development, and of the management structure to control it, is described. The use of computer systems for vehicle testing is highlighted.
Automating the Generation of Heterogeneous Aviation Safety Cases
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Pai, Ganesh J.; Pohl, Josef M.
2012-01-01
A safety case is a structured argument, supported by a body of evidence, which provides a convincing and valid justification that a system is acceptably safe for a given application in a given operating environment. This report describes the development of a fragment of a preliminary safety case for the Swift Unmanned Aircraft System. The construction of the safety case fragment consists of two parts: a manually constructed system-level case, and an automatically constructed lower-level case, generated from formal proof of safety-relevant correctness properties. We provide a detailed discussion of the safety considerations for the target system, emphasizing the heterogeneity of sources of safety-relevant information, and use a hazard analysis to derive safety requirements, including formal requirements. We evaluate the safety case using three classes of metrics for measuring degrees of coverage, automation, and understandability. We then present our preliminary conclusions and make suggestions for future work.
Chen, Yi- Ping Phoebe; Hanan, Jim
2002-01-01
Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, to simplify query development for the recursively-structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly.
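The recursively-structured branching relationship the paper queries can be sketched with a self-referential relational table: each plant module points to its parent, and a recursive SQL query retrieves a whole branch without recursion in application code. The schema and sample modules are illustrative, not the paper's actual design.

```python
import sqlite3

def build_db(modules):
    """Store L-system derived modules in a table with a parent_id
    self-reference; rows are (id, parent_id, symbol)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE module (id INTEGER PRIMARY KEY, "
                "parent_id INTEGER, symbol TEXT)")
    con.executemany("INSERT INTO module VALUES (?, ?, ?)", modules)
    return con

def descendants(con, root_id):
    """All modules on the branch rooted at root_id, via a recursive CTE."""
    rows = con.execute("""
        WITH RECURSIVE sub(id) AS (
            SELECT id FROM module WHERE id = ?
            UNION ALL
            SELECT m.id FROM module m JOIN sub ON m.parent_id = sub.id)
        SELECT id FROM sub""", (root_id,)).fetchall()
    return [r[0] for r in rows]
```

The paper's generated attributes play a similar role to this recursive query: they precompute branch-structure fields so that such traversals become simple lookups.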
Tang, Xiaoying; Luo, Yuan; Chen, Zhibin; Huang, Nianwei; Johnson, Hans J.; Paulsen, Jane S.; Miller, Michael I.
2018-01-01
In this paper, we present a fully-automated subcortical and ventricular shape generation pipeline that acts on structural magnetic resonance images (MRIs) of the human brain. Principally, the proposed pipeline consists of three steps: (1) automated structure segmentation using the diffeomorphic multi-atlas likelihood-fusion algorithm; (2) study-specific shape template creation based on the Delaunay triangulation; (3) deformation-based shape filtering using the large deformation diffeomorphic metric mapping for surfaces. The proposed pipeline is shown to provide high accuracy, sufficient smoothness, and accurate anatomical topology. Two datasets focused upon Huntington's disease (HD) were used for evaluating the performance of the proposed pipeline. The first of these contains a total of 16 MRI scans, each with a gold standard available, on which the proposed pipeline's outputs were observed to be highly accurate and smooth when compared with the gold standard. Visual examinations and outlier analyses on the second dataset, which contains a total of 1,445 MRI scans, revealed 100% success rates for the putamen, the thalamus, the globus pallidus, the amygdala, and the lateral ventricle in both hemispheres and rates no smaller than 97% for the bilateral hippocampus and caudate. Another independent dataset, consisting of 15 atlas images and 20 testing images, was also used to quantitatively evaluate the proposed pipeline, with high accuracy having been obtained. In short, the proposed pipeline is herein demonstrated to be effective, both quantitatively and qualitatively, using a large collection of MRI scans. PMID:29867332
NASA Astrophysics Data System (ADS)
Shimchuk, G.; Shimchuk, Gr; Pakhomov, G.; Avalishvili, G.; Zavrazhnov, G.; Polonsky-Byslaev, I.; Fedotov, A.; Polozov, P.
2017-01-01
One of the prospective directions of PET development is the use of generator-produced positron-emitting nuclides [1,2]. Introduction of this technology is financially promising, since it does not require an expensive dedicated accelerator and radiochemical laboratory in the medical institution, which considerably reduces the cost of PET diagnostics and makes it available to more patients. POZITOM-PRO RPC LLC has developed and produced an 82Sr-82Rb generator; an automated injection system designed for automatic and fully-controlled injections of the 82RbCl produced by this generator; and automated radiopharmaceutical synthesis units, based on 68Ga produced using a domestically-manufactured 68Ge-68Ga generator, for preparing two pharmaceuticals: Ga-68-DOTA-TATE and Vascular Ga-68.
Cerebellum engages in automation of verb-generation skill.
Yang, Zhi; Wu, Paula; Weng, Xuchu; Bandettini, Peter A
2014-03-01
Numerous studies have shown cerebellar involvement in item-specific association, a form of explicit learning. However, very few have demonstrated cerebellar participation in automation of non-motor cognitive tasks. Applying fMRI to a repeated verb-generation task, we sought to distinguish cerebellar involvement in learning of item-specific noun-verb association and automation of verb generation skill. The same set of nouns was repeated in six verb-generation blocks so that subjects practiced generating verbs for the nouns. The practice was followed by a novel block with a different set of nouns. The cerebellar vermis (IV/V) and the right cerebellar lobule VI showed decreased activation following practice; activation in the right cerebellar Crus I was significantly lower in the novel challenge than in the initial verb-generation task. Furthermore, activation in this region during well-practiced blocks strongly correlated with improvement of behavioral performance in both the well-practiced and the novel blocks, suggesting its role in the learning of general mental skills not specific to the practiced noun-verb pairs. Therefore, the cerebellum processes both explicit verbal associative learning and automation of cognitive tasks. Different cerebellar regions predominate in this processing: lobule VI during the acquisition of item-specific association, and Crus I during automation of verb-generation skills through practice.
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
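The structural-to-computational translation at the heart of the workflow can be caricatured in plain Python: an ordered genetic design is walked, and coding parts are enriched into species plus production/degradation reactions, the kind of model skeleton iBioSim emits as SBML. Part names, rate parameters, and the dict-based "model" are invented for the sketch; the real workflow uses the SBOL and SBML libraries and the Virtual Parts Repository.

```python
def design_to_model(parts, k_tx=0.1, k_deg=0.01):
    """Turn a structural design (ordered part dicts) into a toy
    reaction-network model: one species and a production/degradation
    reaction pair per CDS."""
    model = {"species": [], "reactions": []}
    for part in parts:
        if part["type"] == "CDS":
            protein = part["name"]
            model["species"].append(protein)
            model["reactions"].append({"id": f"production_{protein}",
                                       "products": [protein], "rate": k_tx})
            model["reactions"].append({"id": f"degradation_{protein}",
                                       "reactants": [protein], "rate": k_deg})
    return model
```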
"First generation" automated DNA sequencing technology.
Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M
2011-10-01
Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.
Krauser, Joel; Walles, Markus; Wolf, Thierry; Graf, Daniel; Swart, Piet
2012-01-01
Generation and interpretation of biotransformation data on drugs, i.e. identification of physiologically relevant metabolites, defining metabolic pathways and elucidation of metabolite structures, have become increasingly important to the drug development process. Profiling using 14C or 3H radiolabel is defined as the chromatographic separation and quantification of drug-related material in a given biological sample derived from an in vitro, preclinical in vivo or clinical study. Metabolite profiling is a very time-intensive activity, particularly for preclinical in vivo or clinical studies which have defined limitations on radiation burden and exposure levels. A clear gap exists for certain studies which do not require specialized high-volume automation technologies, yet would still clearly benefit from automation. Use of radiolabeled compounds in preclinical and clinical ADME studies, specifically for metabolite profiling and identification, is a very good example. The current lack of automation for measuring low-level radioactivity in metabolite profiling requires substantial capacity, personal attention and resources from laboratory scientists. To help address these challenges and improve efficiency, we have innovated, developed and implemented a novel and flexible automation platform that integrates a robotic plate handling platform, HPLC or UPLC system, mass spectrometer and an automated fraction collector. PMID:22723932
Park, Clara; Gruber-Rouh, Tatjana; Leithner, Doris; Zierden, Amelie; Albrecht, Moritz H; Wichmann, Julian L; Bodelle, Boris; Elsabaie, Mohamed; Scholtz, Jan-Erik; Kaup, Moritz; Vogl, Thomas J; Beeres, Martin
2016-10-10
Evaluation of the impact of latest-generation automated attenuation-based tube potential selection (ATPS) on image quality and radiation dose in contrast-enhanced chest-abdomen-pelvis computed tomography examinations for gynaecologic cancer staging. This IRB-approved, single-centre, observer-blinded retrospective study with a waiver of informed consent included a total of 100 patients with contrast-enhanced chest-abdomen-pelvis CT for gynaecologic cancer staging. All patients were examined with activated ATPS for adaptation of tube voltage to body habitus. 50 patients were scanned on a third-generation dual-source CT (DSCT), and another 50 patients on a second-generation DSCT. Predefined image quality settings remained identical for both groups at 120 kV and 210 reference mAs. Subjective image quality assessment was performed independently by two blinded readers. Attenuation and image noise were measured in several anatomic structures, and signal-to-noise ratio (SNR) was calculated. For the evaluation of radiation exposure, CT dose index (CTDIvol) values were compared. Diagnostic image quality was obtained in all patients. The median CTDIvol (6.1 mGy, range 3.9-22 mGy) was 40 % lower with the latest-generation algorithm than with the previous ATPS protocol (median 10.2 mGy, range 5.8-22.8 mGy). A reduction in tube potential to 90 kV occurred in 19 cases, a reduction to 100 kV in 23 patients and a reduction to 110 kV in 3 patients of our experimental cohort. These patients received significantly lower radiation exposure compared to the former protocol. Latest-generation automated ATPS on third-generation DSCT provides good diagnostic image quality in chest-abdomen-pelvis CT while reducing average radiation dose by 40 % compared to the former ATPS protocol on second-generation DSCT.
Automated design evolution of stereochemically randomized protein foldamers
NASA Astrophysics Data System (ADS)
Ranbhor, Ranjit; Kumar, Anil; Patel, Kirti; Ramakrishnan, Vibin; Durani, Susheel
2018-05-01
Diversification of chain stereochemistry opens up the possibility of an ‘in principle’ increase in the design space of proteins. This huge increase in sequence and consequent structural variation is aimed at the generation of smart materials. To diversify protein structure stereochemically, we introduced L- and D-α-amino acids as the design alphabet. With a sequence design algorithm, we explored the usage of specific variables, such as the chirality and the sequence of this alphabet, in independent steps. With molecular dynamics, we folded stereochemically diverse homopolypeptides and evaluated their ‘fitness’ for possible design as protein-like foldamers. We propose a fitness function to prune the most optimal fold among 1000 structures simulated with an automated repetitive simulated annealing molecular dynamics (AR-SAMD) approach. The highest-scoring poly-leucine folds, with sequence lengths of 24 and 30 amino acids, were later sequence-optimized using a Dead-End Elimination plus Monte Carlo based optimization tool. This paper demonstrates a novel approach for the de novo design of protein-like foldamers.
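The simulated-annealing core of an AR-SAMD-style search can be sketched on a toy energy function: propose a perturbation, always accept downhill moves, accept uphill moves with the Metropolis probability, and cool geometrically. The scalar energy, step size, and schedule below are placeholders; the actual method anneals a molecular-dynamics force field over full polypeptide conformations.

```python
import math, random

def simulated_annealing(energy, x0, steps=2000, t0=2.0, seed=0):
    """Toy annealing loop: Metropolis acceptance with geometric cooling,
    tracking the best state seen (the analogue of the pruned best fold)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)       # propose a perturbation
        ce = energy(cand)
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            x, e = cand, ce                     # Metropolis accept
            if e < best_e:
                best_x, best_e = x, e
        t *= 0.995                              # geometric cooling
    return best_x, best_e
```

Repeating such runs from varied starts and scoring the resulting pool is the "repetitive" part of the approach.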
Automated software system for checking the structure and format of ACM SIG documents
NASA Astrophysics Data System (ADS)
Mirza, Arsalan Rahman; Sah, Melike
2017-04-01
Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents is automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents by using OWL (Web Ontology Language). Then, the metadata is extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, and the system and user study evaluations.
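Once paragraph styles have been extracted from the OOXML (the `word/document.xml` inside the `.docx` zip), conformance checking reduces to rules over the style sequence. The sketch below shows that rule stage only; the style names and rules are illustrative stand-ins, not the actual ACM SIG template or ADFCS's OWL/RDF machinery.

```python
# Each rule pairs a human-readable description with a predicate over the
# extracted (style, text) paragraph list.
RULES = [
    ("first paragraph must use the Title style",
     lambda ps: bool(ps) and ps[0][0] == "Title"),
    ("an Abstract-styled paragraph must appear",
     lambda ps: any(s == "Abstract" for s, _ in ps)),
    ("only known styles may be used",
     lambda ps: all(s in {"Title", "Abstract", "Heading1", "Normal"}
                    for s, _ in ps)),
]

def check_format(paragraphs):
    """Return the descriptions of violated rules (empty list = conformant)."""
    return [desc for desc, rule in RULES if not rule(paragraphs)]
```

In ADFCS the analogous rules are expressed over RDF metadata and run through an inference engine rather than as Python predicates.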
Enabling Rapid and Robust Structural Analysis During Conceptual Design
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu
2015-01-01
This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.
Investigating the impact of automated feedback on students' scientific argumentation
NASA Astrophysics Data System (ADS)
Zhu, Mengxiao; Lee, Hee-Sun; Wang, Ting; Liu, Ou Lydia; Belur, Vinetha; Pallant, Amy
2017-08-01
This study investigates the role of automated scoring and feedback in supporting students' construction of written scientific arguments while learning about factors that affect climate change in the classroom. The automated scoring and feedback technology was integrated into an online module. Students' written scientific argumentation occurred when they responded to structured argumentation prompts. After submitting the open-ended responses, students received scores generated by a scoring engine and written feedback associated with the scores in real time. Using the log data that recorded argumentation scores as well as argument submission and revision activities, we answer three research questions: first, how students behaved after receiving the feedback; second, whether and how students' revisions improved their argumentation scores; and third, whether item difficulties shifted with the availability of the automated feedback. Results showed that the majority of students (77%) made revisions after receiving the feedback, and students with higher initial scores were more likely to revise their responses. Students who revised had significantly higher final scores than those who did not, and each revision was associated with an average increase of 0.55 on the final scores. Analysis of item difficulty shifts showed that written scientific argumentation became easier after students used the automated feedback.
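The per-revision gain statistic can be computed directly from such log data: given each student's sequence of argumentation scores (initial submission followed by revisions), average the score change across all revisions. The sample log in the test is invented for illustration; 0.55 is the value the study reports, not something this sketch reproduces.

```python
def mean_gain_per_revision(score_logs):
    """Average score change per revision across all students.

    score_logs: list of per-student score sequences, each ordered as
    [initial, revision 1, revision 2, ...].
    """
    gains, revisions = 0.0, 0
    for scores in score_logs:
        for before, after in zip(scores, scores[1:]):  # consecutive pairs
            gains += after - before
            revisions += 1
    return gains / revisions if revisions else 0.0
```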
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
Data-Driven Learning of Total and Local Energies in Elemental Boron
NASA Astrophysics Data System (ADS)
Deringer, Volker L.; Pickard, Chris J.; Csányi, Gábor
2018-04-01
The allotropes of boron continue to challenge structural elucidation and solid-state theory. Here we use machine learning combined with random structure searching (RSS) algorithms to systematically construct an interatomic potential for boron. Starting from ensembles of randomized atomic configurations, we use alternating single-point quantum-mechanical energy and force computations, Gaussian approximation potential (GAP) fitting, and GAP-driven RSS to iteratively generate a representation of the element's potential-energy surface. Beyond the total energies of the very different boron allotropes, our model readily provides atom-resolved, local energies and thus deepened insight into the frustrated β-rhombohedral boron structure. Our results open the door for the efficient and automated generation of GAPs, and other machine-learning-based interatomic potentials, and suggest their usefulness as a tool for materials discovery.
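The fit-then-search loop can be caricatured in one dimension: a Gaussian-kernel ridge regression (the core regression behind a GAP-style fit) acts as the surrogate potential, random candidates are screened against it, and the most promising one is re-evaluated by the "true" function and added to the training set. The toy energy, candidate range, and hyperparameters are invented for the sketch; real GAP-RSS works on atomic environments with DFT energies and forces.

```python
import numpy as np

def fit_gp(train_x, train_y, sigma=1.0, reg=1e-8):
    """Gaussian-kernel ridge regression on a toy 1D 'potential-energy
    surface'; returns a predict(x) closure."""
    k = np.exp(-(train_x[:, None] - train_x[None, :])**2 / (2 * sigma**2))
    alpha = np.linalg.solve(k + reg * np.eye(len(train_x)), train_y)
    def predict(x):
        kx = np.exp(-(x[:, None] - train_x[None, :])**2 / (2 * sigma**2))
        return kx @ alpha
    return predict

def gap_rss_step(energy, train_x, train_y, rng):
    """One loop iteration: screen random candidates with the surrogate,
    re-evaluate the best with the true energy, grow the training set."""
    model = fit_gp(train_x, train_y)
    cands = rng.uniform(-3.0, 3.0, size=64)
    best = cands[np.argmin(model(cands))]
    return np.append(train_x, best), np.append(train_y, energy(best))
```

Iterating this step is the sketch-level analogue of the paper's alternating single-point computations, GAP fitting, and GAP-driven RSS.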
Automated design of degenerate codon libraries.
Mena, Marco A; Daugherty, Patrick S
2005-12-01
Degenerate codon libraries are frequently used in protein engineering and evolution studies but are often limited to targeting a small number of positions to adequately limit the search space. To mitigate this, codon degeneracy can be limited using heuristics or previous knowledge of the targeted positions. To automate design of libraries given a set of amino acid sequences, an algorithm (LibDesign) was developed that generates a set of possible degenerate codon libraries, their resulting size, and their score relative to a user-defined scoring function. A gene library of a specified size can then be constructed that is representative of the given amino acid distribution or that includes specific sequences or combinations thereof. LibDesign provides a new tool for automated design of high-quality protein libraries that more effectively harness existing sequence-structure information derived from multiple sequence alignment or computational protein design data.
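The library-size bookkeeping that such a design algorithm reports follows directly from IUPAC degeneracy codes. The sketch below is illustrative only (LibDesign's actual scoring function is richer): it computes how many distinct DNA sequences a set of degenerate codons encodes.

```python
# IUPAC degenerate nucleotide codes: each letter names a set of bases.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def codon_size(codon):
    """Number of distinct DNA codons encoded by one degenerate codon."""
    size = 1
    for base in codon:
        size *= len(IUPAC[base])
    return size

def library_size(degenerate_codons):
    """DNA-level size of a library using one degenerate codon per position."""
    total = 1
    for codon in degenerate_codons:
        total *= codon_size(codon)
    return total

print(codon_size("NNK"))                       # 4 * 4 * 2 = 32
print(library_size(["NNK", "NNK", "RVK"]))     # 32 * 32 * 12 = 12288
```

Keeping this product within screening capacity while covering the desired amino acid distribution is exactly the trade-off the automated design has to score.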
Automated installation methods for photovoltaic arrays
NASA Astrophysics Data System (ADS)
Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.
1982-11-01
Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reduction of these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary hardware designs for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.
Automated Analysis of Stateflow Models
NASA Technical Reports Server (NTRS)
Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsai, Temesghen; Thirioux, Xavier
2017-01-01
Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a fully automated safety verification technique for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.
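At its core, a safety property over a compiled state machine asks whether any "bad" state is reachable. The toy sketch below (an assumption for illustration; the paper's engine uses logic-based verification, not explicit enumeration) checks safety by exhaustive reachability over a small hypothetical chart:

```python
from collections import deque

def safe(transitions, init, bad):
    """Safety by exhaustive reachability: no state in `bad` is reachable
    from `init`. Toy stand-in for a logic-based verification engine."""
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, ()):
            if nxt in bad:
                return False
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Hypothetical chart: Off -> Standby -> On, with no path to a Fault state.
chart = {"Off": ["Standby"], "Standby": ["On", "Off"], "On": ["Standby"]}
print(safe(chart, "Off", {"Fault"}))   # True: Fault is unreachable
```

Real Stateflow models have hierarchy, guards, and data, which is why the paper compiles them to a formal representation and discharges such queries symbolically rather than by enumeration.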
Information entropy of humpback whale songs.
Suzuki, Ryuji; Buck, John R; Tyack, Peter L
2006-03-01
The structure of humpback whale (Megaptera novaeangliae) songs was examined using information theory techniques. The song is an ordered sequence of individual sound elements separated by gaps of silence. Song samples were converted into sequences of discrete symbols by both human and automated classifiers. This paper analyzes the song structure in these symbol sequences using information entropy estimators and autocorrelation estimators. Both parametric and nonparametric entropy estimators are applied to the symbol sequences representing the songs. The results provide quantitative evidence consistent with the hierarchical structure proposed for these songs by Payne and McVay [Science 173, 587-597 (1971)]. Specifically, this analysis demonstrates that: (1) There is a strong structural constraint, or syntax, in the generation of the songs, and (2) the structural constraints exhibit periodicities with periods of 6-8 and 180-400 units. This implies that no empirical Markov model is capable of representing the songs' structure. The results are robust to the choice of either human or automated song-to-symbol classifiers. In addition, the entropy estimates indicate that the maximum amount of information that could be communicated by the sequence of sounds made is less than 1 bit per second.
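The nonparametric side of the entropy analysis above can be sketched in a few lines. This toy version (an assumption, not the authors' code) computes the plug-in zeroth-order entropy and the first-order conditional entropy of a symbol sequence in bits:

```python
import math
from collections import Counter

def entropy(seq):
    """Plug-in (maximum-likelihood) entropy of i.i.d. symbols, in bits."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def conditional_entropy(seq):
    """First-order entropy H(X_t | X_{t-1}) from bigram counts, in bits."""
    bigrams = Counter(zip(seq, seq[1:]))
    n = len(seq) - 1
    h_joint = -sum(c / n * math.log2(c / n) for c in bigrams.values())
    return h_joint - entropy(seq[:-1])

song = list("abcabcabcabcabcabc")
print(entropy(song))               # log2(3) ~ 1.585 bits per symbol
print(conditional_entropy(song))   # ~0: the next unit is fully determined
```

A large gap between the marginal and conditional entropies, as in this synthetic "song", is the quantitative signature of syntactic constraint that the whale-song analysis measures.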
NASA Technical Reports Server (NTRS)
Gladden, Roy
2007-01-01
Version 2.0 of the autogen software has been released. "Autogen" (automated sequence generation) signifies both a process and software used to implement the process of automated generation of sequences of commands in a standard format for uplink to spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes.
Automated de novo phasing and model building of coiled-coil proteins.
Rämisch, Sebastian; Lizatović, Robert; André, Ingemar
2015-03-01
Models generated by de novo structure prediction can be very useful starting points for molecular replacement for systems where suitable structural homologues cannot be readily identified. Protein-protein complexes and de novo-designed proteins are examples of systems that can be challenging to phase. In this study, the potential of de novo models of protein complexes for use as starting points for molecular replacement is investigated. The approach is demonstrated using homomeric coiled-coil proteins, which are excellent model systems for oligomeric systems. Despite the stereotypical fold of coiled coils, initial phase estimation can be difficult and many structures have to be solved with experimental phasing. A method was developed for automatic structure determination of homomeric coiled coils from X-ray diffraction data. In a benchmark set of 24 coiled coils, ranging from dimers to pentamers with resolutions down to 2.5 Å, 22 systems were automatically solved, 11 of which had previously been solved by experimental phasing. The generated models contained 71-103% of the residues present in the deposited structures, had the correct sequence and had free R values that deviated on average by 0.01 from those of the respective reference structures. The electron-density maps were of sufficient quality that only minor manual editing was necessary to produce final structures. The method, named CCsolve, combines methods for de novo structure prediction, initial phase estimation and automated model building into one pipeline. CCsolve is robust against errors in the initial models and can readily be modified to make use of alternative crystallographic software. The results demonstrate the feasibility of de novo phasing of protein-protein complexes, an approach that could also be employed for other small systems beyond coiled coils.
Lee, Woonghee; Kim, Jin Hae; Westler, William M.; Markley, John L.
2011-01-01
Summary: PONDEROSA (Peak-picking Of Noe Data Enabled by Restriction of Shift Assignments) accepts input information consisting of a protein sequence, backbone and sidechain NMR resonance assignments, and 3D-NOESY (13C-edited and/or 15N-edited) spectra, and returns assignments of NOESY crosspeaks, distance and angle constraints, and a reliable NMR structure represented by a family of conformers. PONDEROSA incorporates and integrates external software packages (TALOS+, STRIDE and CYANA) to carry out different steps in the structure determination. PONDEROSA implements internal functions that identify and validate NOESY peak assignments and assess the quality of the calculated three-dimensional structure of the protein. The robustness of the analysis results from PONDEROSA's hierarchical processing steps that involve iterative interaction among the internal and external modules. PONDEROSA supports a variety of input formats: SPARKY assignment table (.shifts) and spectrum file formats (.ucsf), XEASY proton file format (.prot), and NMR-STAR format (.star). To demonstrate the utility of PONDEROSA, we used the package to determine 3D structures of two proteins: human ubiquitin and Escherichia coli iron-sulfur scaffold protein variant IscU(D39A). The automatically generated structural constraints and ensembles of conformers were as good as or better than those determined previously by much less automated means. Availability: The program, in the form of binary code along with tutorials and reference manuals, is available at http://ponderosa.nmrfam.wisc.edu/. Contact: whlee@nmrfam.wisc.edu; markley@nmrfam.wisc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21511715
Terminology model discovery using natural language processing and visualization techniques.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
2006-12-01
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
Automatic Generation of Test Oracles - From Pilot Studies to Application
NASA Technical Reports Server (NTRS)
Feather, Martin S.; Smith, Ben
1998-01-01
There is a trend towards the increased use of automation in V&V. Automation can yield savings in time and effort. For critical systems, where thorough V&V is required, these savings can be substantial. We describe a progression from pilot studies to development and use of V&V automation. We used pilot studies to ascertain opportunities for, and suitability of, automating various analyses whose results would contribute to V&V. These studies culminated in the development of an automatic generator of automated test oracles. This was then applied and extended in the course of testing an AI planning system that is a key component of an autonomous spacecraft.
Automation of large scale transient protein expression in mammalian cells
Zhao, Yuguang; Bishop, Benjamin; Clay, Jordan E.; Lu, Weixian; Jones, Margaret; Daenke, Susan; Siebold, Christian; Stuart, David I.; Yvonne Jones, E.; Radu Aricescu, A.
2011-01-01
Traditional mammalian expression systems rely on the time-consuming generation of stable cell lines; this is difficult to accommodate within a modern structural biology pipeline. Transient transfections are a fast, cost-effective solution, but require skilled cell culture scientists, making man-power a limiting factor in a setting where numerous samples are processed in parallel. Here we report a strategy employing a customised CompacT SelecT cell culture robot allowing the large-scale expression of multiple protein constructs in a transient format. Successful protocols have been designed for automated transient transfection of human embryonic kidney (HEK) 293T and 293S GnTI− cells in various flask formats. Protein yields obtained by this method were similar to those produced manually, with the added benefit of reproducibility, regardless of user. Automation of cell maintenance and transient transfection allows the expression of high quality recombinant protein in a completely sterile environment with limited support from a cell culture scientist. The reduction in human input has the added benefit of enabling continuous cell maintenance and protein production, features of particular importance to structural biology laboratories, which typically use large quantities of pure recombinant proteins, and often require rapid characterisation of a series of modified constructs. This automated method for large scale transient transfection is now offered as a Europe-wide service via the P-cube initiative. PMID:21571074
The Choice between MapMan and Gene Ontology for Automated Gene Function Prediction in Plant Science
Klie, Sebastian; Nikoloski, Zoran
2012-01-01
Since the introduction of the Gene Ontology (GO), the analysis of high-throughput data has become tightly coupled with the use of ontologies to establish associations between knowledge and data in an automated fashion. Ontologies provide a systematic description of knowledge by a controlled vocabulary of defined structure in which ontological concepts are connected by pre-defined relationships. In plant science, MapMan and GO offer two alternatives for ontology-driven analyses. Unlike GO, initially developed to characterize microbial systems, MapMan was specifically designed to cover plant-specific pathways and processes. While the dependencies between concepts in MapMan are modeled as a tree, in GO these are captured in a directed acyclic graph. Therefore, the difference in ontologies may cause discrepancies in data reduction, visualization, and hypothesis generation. Here we provide the first systematic comparative analysis of GO and MapMan for the case of the model plant species Arabidopsis thaliana (Arabidopsis) with respect to their structural properties and differences in distributions of information content. In addition, we investigate the effect of the two ontologies on the specificity and sensitivity of automated gene function prediction via the coupling of co-expression networks and the guilt-by-association principle. Automated gene function prediction is particularly needed for the model plant Arabidopsis, in which only half of the genes have been functionally annotated based on sequence similarity to known genes. The results highlight the need for structured representation of species-specific biological knowledge, and warrant caution in the design principles employed in future ontologies. PMID:22754563
Web-Based Collaborative Publications System: R&Tserve
NASA Technical Reports Server (NTRS)
Abrams, Steve
1997-01-01
R&Tserve is a publications system based on 'commercial, off-the-shelf' (COTS) software that provides a persistent, collaborative workspace for authors and editors to support the entire publication development process from initial submission, through iterative editing in a hierarchical approval structure, and on to 'publication' on the WWW. It requires no specific knowledge of the WWW (beyond basic use) or HyperText Markup Language (HTML). Graphics and URLs are automatically supported. The system includes a transaction archive, a comments utility, help functionality, automated graphics conversion, automated table generation, and an email-based notification system. It may be configured and administered via the WWW and can support publications ranging from single page documents to multiple-volume 'tomes'.
Optimization of Composite Structures with Curved Fiber Trajectories
NASA Astrophysics Data System (ADS)
Lemaire, Etienne; Zein, Samih; Bruyneel, Michael
2014-06-01
This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generate equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows the fiber orientations resulting from a uniform-thickness layup to be determined. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, maximum-stiffness optimization numerical applications are proposed. The shape of the design space is discussed, regarding local and global optimal solutions.
Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)
NASA Astrophysics Data System (ADS)
Habiballa, Hashim; Jendryscik, Radek
2017-11-01
Teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft computing. The key focus is often on a proper understanding of the principles of AI methods in two essential respects: why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in sensible time. We present an interesting problem solved in non-educational research concerning the automated generation of specific algebras in a huge search space, and we use it as an educational case study that emphasizes both of the above points.
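For classroom purposes, the "why" and "how" of such soft-computing searches are often introduced with a minimal genetic algorithm. The sketch below is a generic teaching example (tournament selection, one-point crossover, bit-flip mutation) on the standard OneMax toy objective; it is not the paper's algebra-generation system.

```python
import random

def genetic_search(fitness, n_bits=12, pop=30, gens=40, seed=1):
    """Minimal GA: evolve bitstrings toward higher fitness."""
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        def pick():
            # Tournament selection of size 3.
            return max(random.sample(population, 3), key=fitness)
        nxt = []
        while len(nxt) < pop:
            a, b = pick(), pick()
            cut = random.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # occasional bit flip
                i = random.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)

# Toy objective: the all-ones string maximizes the number of set bits.
best = genetic_search(sum)
```

The pedagogical point matches the abstract's two questions: exhaustive search over even a modest space is infeasible, while a stochastic population-based search finds near-optimal candidates in sensible time.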
NASA Astrophysics Data System (ADS)
Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.
1999-05-01
PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor-intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity-curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867
Generating Code Review Documentation for Auto-Generated Mission-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2009-01-01
Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements, and formally verifies that the autogenerated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.
MSTB 2 x 6-Inch Low Speed Tunnel Turbulence Generator Grid/Honeycomb PIV Measurements and Analysis
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
An assessment of the turbulence levels present in the Measurement Science and Technology (MSTB) branch's 2 x 6-inch low speed wind tunnel was made using Particle Image Velocimetry (PIV), and a turbulence generator consisting of a grid/honeycomb structure. Approximately 3000 digital PIV images were captured and analyzed covering an approximate 2 x 6-inch area along the centerline of the tunnel just beyond the turbulence generator system. Custom software for analysis and acquisition was developed for semi-automated digital PIV image acquisition and analysis. Comparisons between previously obtained LTA and LV turbulence measurements taken in the tunnel are presented.
PICKY: a novel SVD-based NMR spectra peak picking method
Alipanahi, Babak; Gao, Xin; Karakoc, Emre; Donaldson, Logan; Li, Ming
2009-01-01
Motivation: Picking peaks from experimental NMR spectra is a key unsolved problem for automated NMR protein structure determination. Such a process is a prerequisite for resonance assignment, nuclear overhauser enhancement (NOE) distance restraint assignment, and structure calculation tasks. Manual or semi-automatic peak picking, which is currently the prominent way used in NMR labs, is tedious, time consuming and costly. Results: We introduce new ideas, including noise-level estimation, component forming and sub-division, singular value decomposition (SVD)-based peak picking and peak pruning and refinement. PICKY is developed as an automated peak picking method. Different from the previous research on peak picking, we provide a systematic study of the proposed method. PICKY is tested on 32 real 2D and 3D spectra of eight target proteins, and achieves an average of 88% recall and 74% precision. PICKY is efficient. It takes PICKY on average 15.7 s to process an NMR spectrum. More important than these numbers, PICKY actually works in practice. We feed peak lists generated by PICKY to IPASS for resonance assignment, feed IPASS assignment to SPARTA for fragments generation, and feed SPARTA fragments to FALCON for structure calculation. This results in high-resolution structures of several proteins, for example, TM1112, at 1.25 Å. Availability: PICKY is available upon request. The peak lists of PICKY can be easily loaded by SPARKY to enable a better interactive strategy for rapid peak picking. Contact: mli@uwaterloo.ca PMID:19477998
AN ULTRAVIOLET-VISIBLE SPECTROPHOTOMETER AUTOMATION SYSTEM. PART III: PROGRAM DOCUMENTATION
The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes 'on-line' spectrophotometric quality assurance determinations, report generations, plot generations and data reduction for chlorophyll or color analysis. This system also has the capability to proces...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-27
... generate and submit option quotations electronically through AUTOM in eligible options to which such SQT is... from the Exchange to generate and submit option quotations electronically through AUTOM in eligible...
Automated analysis of biological oscillator models using mode decomposition.
Konopka, Tomasz
2011-04-01
Oscillating signals produced by biological systems have shapes, described by their Fourier spectra, that can potentially reveal the mechanisms that generate them. Extracting this information from measured signals is interesting for the validation of theoretical models, discovery and classification of interaction types, and for optimal experiment design. An automated workflow is described for the analysis of oscillating signals. A software package is developed to match signal shapes to hundreds of a priori viable model structures defined by a class of first-order differential equations. The package computes parameter values for each model by exploiting the mode decomposition of oscillating signals and formulating the matching problem in terms of systems of simultaneous polynomial equations. On the basis of the computed parameter values, the software returns a list of models consistent with the data. In validation tests with synthetic datasets, it not only shortlists those model structures used to generate the data but also shows that excellent fits can sometimes be achieved with alternative equations. The listing of all consistent equations is indicative of how further invalidation might be achieved with additional information. When applied to data from a microarray experiment on mice, the procedure finds several candidate model structures to describe interactions related to the circadian rhythm. This shows that experimental data on oscillators is indeed rich in information about gene regulation mechanisms. The software package is available at http://babylone.ulb.ac.be/autoosc/.
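The mode decomposition that the matching procedure starts from is simply the Fourier spectrum of one period of the signal. The sketch below (a generic illustration, not the package's API) extracts the leading complex mode coefficients with a direct DFT:

```python
import cmath
import math

def fourier_modes(samples, n_modes):
    """Complex Fourier coefficients c_k of one period of a sampled signal."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n_modes)]

# A signal with a fundamental and one harmonic: the relative mode
# amplitudes carry the shape information used to match model structures.
n = 64
signal = [math.cos(2 * math.pi * t / n)
          + 0.25 * math.cos(2 * math.pi * 2 * t / n)
          for t in range(n)]
modes = [abs(c) for c in fourier_modes(signal, 4)]
```

For this signal the k=1 and k=2 modes dominate (amplitudes 0.5 and 0.125, since each real cosine splits between positive and negative frequencies); a candidate differential-equation model must reproduce such amplitude ratios to be consistent with the data.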
Inferring ontology graph structures using OWL reasoning.
Rodríguez-García, Miguel Ángel; Hoehndorf, Robert
2018-01-05
Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
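The core idea of materializing implied-but-not-asserted relations can be illustrated with plain transitive closure over is-a edges. This is a toy stand-in for the method: real OWL ontologies require a description-logic reasoner and handle existential patterns, not just subsumption.

```python
def deductive_edges(asserted):
    """All subclass edges in the transitive closure of asserted is-a edges."""
    closure = set(asserted)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Hypothetical mini-taxonomy (class names are illustrative only).
asserted = {("Kinase", "Enzyme"), ("Enzyme", "Protein"),
            ("Protein", "Molecule")}
implied = deductive_edges(asserted) - asserted
# `implied` holds edges such as ("Kinase", "Protein") that are
# entailed by the axioms but never asserted directly.
```

Graphs enriched with such entailed edges are what improve the downstream semantic-similarity computations reported in the paper.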
Correction of spin diffusion during iterative automated NOE assignment
NASA Astrophysics Data System (ADS)
Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael
2004-04-01
Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
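The matrix-squaring integration mentioned above computes the relaxation propagator exp(Rt) by taking one short first-order step and then repeatedly squaring it. The following is a minimal sketch under toy assumptions (a symmetric 2x2 rate matrix with diagonal leakage and one cross-relaxation term; real relaxation matrices are large and sparse):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def propagator(rate, t, squarings=30):
    """Approximate exp(rate * t): first-order step over t / 2**squarings,
    then square the step matrix `squarings` times."""
    n = len(rate)
    dt = t / (2 ** squarings)
    p = [[(1.0 if i == j else 0.0) + rate[i][j] * dt for j in range(n)]
         for i in range(n)]
    for _ in range(squarings):
        p = matmul(p, p)
    return p

# Two-spin toy relaxation matrix: auto-relaxation on the diagonal,
# cross-relaxation off the diagonal (units of 1/s; values illustrative).
R = [[-1.0, 0.3],
     [0.3, -1.0]]
P = propagator(R, 1.0)
```

The off-diagonal entries of `P` describe magnetization transferred between spins over the mixing time, including the indirect (spin-diffusion) pathways; comparing them with the direct-transfer expectation yields a correction factor for the distance restraints.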
Automated measurement of human body shape and curvature using computer vision
NASA Astrophysics Data System (ADS)
Pearson, Jeremy D.; Hobson, Clifford A.; Dangerfield, Peter H.
1993-06-01
A system to measure the surface shape of the human body has been constructed. The system uses a fringe pattern generated by projection of multi-stripe structured light. The optical methodology used is fully described and the algorithms used to process acquired digital images are outlined. The system has been applied to the measurement of the shape of the human back in scoliosis.
Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction
NASA Astrophysics Data System (ADS)
Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.
2013-12-01
We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction-based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality of providing an automated code generation framework which takes a user-provided PDE right-hand side in symbolic form and generates efficient, computer-architecture-specific, parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both in standard test problems and in global magnetosphere simulations.
Machine learning in a graph framework for subcortical segmentation
NASA Astrophysics Data System (ADS)
Guo, Zhihui; Kashyap, Satyananda; Sonka, Milan; Oguz, Ipek
2017-02-01
Automated and reliable segmentation of subcortical structures from human brain magnetic resonance images is of great importance for volumetric and shape analyses in quantitative neuroimaging studies. However, poor boundary contrast and the variable shape of these structures make automated segmentation a challenging task. We propose a 3D graph-based machine learning method, called LOGISMOS-RF, to segment the caudate and the putamen from brain MRI scans in a robust and accurate way. An atlas-based tissue classification and bias-field correction method is applied to the images to generate an initial segmentation for each structure. A 3D graph framework is then utilized to construct a geometric graph for each initial segmentation. A locally trained random forest classifier is used to assign a cost to each graph node, and the max-flow algorithm is applied to solve the segmentation problem. Evaluation was performed on a dataset of T1-weighted MRIs of 62 subjects, with 42 images used for training and 20 images for testing. For comparison, the FreeSurfer, FSL and BRAINSCut approaches were also evaluated using the same dataset. Dice overlap coefficients and surface-to-surface distances between the automated segmentation and expert manual segmentations indicate that the results of our method are statistically significantly more accurate than the three other methods, for both the caudate (Dice: 0.89 +/- 0.03) and the putamen (0.89 +/- 0.03).
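The max-flow step can be illustrated on a toy two-pixel graph with invented unary and smoothness costs (far simpler than the LOGISMOS-RF geometric graph with random-forest node costs): the minimum s-t cut simultaneously minimizes the total labeling cost, and the source side of the cut gives the segmentation labels.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: push flow along shortest augmenting paths, then
    read the min cut off the residual graph."""
    n = len(cap)
    residual = [row[:] for row in cap]
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        path, v = [], t
        while v != s:                      # walk the augmenting path back
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= push
            residual[v][u] += push
        flow += push
    side = [False] * n                     # source side of the min cut
    side[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if residual[u][v] > 0 and not side[v]:
                side[v] = True
                q.append(v)
    return flow, side

# Two pixels p0, p1: s->i carries the cost of label 1, i->t the cost of
# label 0, and i<->j a smoothness penalty of 1 (all values invented).
S, P0, P1, T = 0, 1, 2, 3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 1],
       [0, 1, 0, 4],
       [0, 0, 0, 0]]
flow, side = max_flow(cap, S, T)
labels = [0 if side[i] else 1 for i in (P0, P1)]
```

The optimal labeling here is p0=0, p1=1 with total cost 4, and the max-flow value equals that minimum cut cost.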
Design automation techniques for custom LSI arrays
NASA Technical Reports Server (NTRS)
Feller, A.
1975-01-01
The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations and further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
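The notion of a tunneling splitting in a double-minimum potential can be illustrated with a one-dimensional finite-difference sketch. The quartic potential, grid, and dimensionless units (hbar = m = 1) below are invented and unrelated to the actual OH3(+)/NH3 surfaces, which are multidimensional.

```python
import numpy as np

# Symmetric double well: two minima at x = +/-1, barrier height 10
n, L = 801, 3.5
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = 10.0 * (x**2 - 1.0)**2

# Hamiltonian -(1/2) d2/dx2 + V(x), second-order central differences
H = np.diag(V + 1.0 / dx**2)
H += np.diag(np.full(n - 1, -0.5 / dx**2), 1)
H += np.diag(np.full(n - 1, -0.5 / dx**2), -1)

E = np.linalg.eigvalsh(H)[:3]
splitting = E[1] - E[0]   # tunneling splitting of the lowest pair
```

The lowest two levels sit below the barrier and form a nearly degenerate pair whose small gap is the tunneling splitting, while the gap to the next level remains of the order of the well's vibrational spacing.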
Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft
NASA Technical Reports Server (NTRS)
Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz
2008-01-01
In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.
Using Automated Theorem Provers to Certify Auto-Generated Aerospace Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd; Schumann, Johann
2004-01-01
We describe a system for the automated certification of safety properties of NASA software. The system uses Hoare-style program verification technology to generate proof obligations which are then processed by an automated first-order theorem prover (ATP). For full automation, however, the obligations must be aggressively preprocessed and simplified. We describe the unique requirements this places on the ATP and demonstrate how the individual simplification stages, which are implemented by rewriting, influence the ability of the ATP to solve the proof tasks. Experiments on more than 25,000 tasks were carried out using Vampire, SPASS, and e-setheo.
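The Hoare-style generation of proof obligations can be sketched for straight-line assignments via the weakest-precondition rule wp(x := e, Q) = Q[x -> e]. The program, property, and string-based substitution below are an invented miniature of the actual system, and the numeric spot-check merely stands in for discharging the obligation with an ATP.

```python
import re

def wp_assign(var, expr, post):
    """Substitute (expr) for whole-word occurrences of var in post."""
    return re.sub(rf"\b{var}\b", f"({expr})", post)

def wp(program, post):
    """Weakest precondition of a sequence of assignments, back to front."""
    for var, expr in reversed(program):
        post = wp_assign(var, expr, post)
    return post

program = [("x", "x + 1"), ("y", "x * 2")]   # x := x+1; y := x*2
obligation = wp(program, "y >= 4")

# The real system would pass "x >= 1 implies <obligation>" to a prover
# such as Vampire; here we only spot-check it on a few inputs.
holds = all(eval(obligation, {"x": v}) for v in range(1, 20))
```

Even this tiny example shows why aggressive simplification matters: naive substitution quickly piles up redundant parentheses and arithmetic that a prover must normalize before it can make progress.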
NASA Technical Reports Server (NTRS)
Chapman, K. B.; Cox, C. M.; Thomas, C. W.; Cuevas, O. O.; Beckman, R. M.
1994-01-01
The Flight Dynamics Facility (FDF) at the NASA Goddard Space Flight Center (GSFC) generates numerous products for NASA-supported spacecraft, including the Tracking and Data Relay Satellites (TDRS's), the Hubble Space Telescope (HST), the Extreme Ultraviolet Explorer (EUVE), and the space shuttle. These products include orbit determination data, acquisition data, event scheduling data, and attitude data. In most cases, product generation involves repetitive execution of many programs. The increasing number of missions supported by the FDF has necessitated the use of automated systems to schedule, execute, and quality assure these products. This automation allows the delivery of accurate products in a timely and cost-efficient manner. To be effective, these systems must automate as many repetitive operations as possible and must be flexible enough to meet changing support requirements. The FDF Orbit Determination Task (ODT) has implemented several systems that automate product generation and quality assurance (QA). These systems include the Orbit Production Automation System (OPAS), the New Enhanced Operations Log (NEOLOG), and the Quality Assurance Automation Software (QA Tool). Implementation of these systems has resulted in a significant reduction in required manpower, elimination of shift work and most weekend support, and improved support quality, while incurring minimal development cost. This paper will present an overview of the concepts used and experiences gained from the implementation of these automation systems.
Automated Assessment in Massive Open Online Courses
ERIC Educational Resources Information Center
Ivaniushin, Dmitrii A.; Shtennikov, Dmitrii G.; Efimchick, Eugene A.; Lyamin, Andrey V.
2016-01-01
This paper describes an approach to using automated assessments in online courses. The Open edX platform is used as the online course platform. The new assessment type uses Scilab as the learning and solution validation tool. This approach allows the use of automated individual variant generation and automated solution checks without involving the course…
Formal Safety Certification of Aerospace Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2005-01-01
In principle, formal methods offer many advantages for aerospace software development: they can help to achieve ultra-high reliability, and they can be used to provide evidence of the reliability claims which can then be subjected to external scrutiny. However, despite years of research and many advances in the underlying formalisms of specification, semantics, and logic, formal methods are not much used in practice. In our opinion this is related to three major shortcomings. First, the application of formal methods is still expensive because they are labor- and knowledge-intensive. Second, they are difficult to scale up to complex systems because they are based on deep mathematical insights about the behavior of the systems (i.e., they rely on the "heroic proof"). Third, the proofs can be difficult to interpret, and typically stand in isolation from the original code. In this paper, we describe a tool for formally demonstrating safety-relevant aspects of aerospace software, which largely circumvents these problems. We focus on safety properties because it has been observed that safety violations such as out-of-bounds memory accesses or use of uninitialized variables constitute the majority of the errors found in the aerospace domain. In our approach, safety means that the program will not violate a set of rules that can range from simple memory-access rules to high-level flight rules. These different safety properties are formalized as different safety policies in Hoare logic, which are then used by a verification condition generator along with the code and logical annotations in order to derive formal safety conditions; these are then proven using an automated theorem prover. Our certification system is currently integrated into a model-based code generation toolset that generates the annotations together with the code.
However, this automated formal certification technology is not exclusively constrained to our code generator and could, in principle, also be integrated with other code generators such as Real-Time Workshop, or even applied to legacy code. Our approach circumvents the historical problems with formal methods by increasing the degree of automation on all levels. The restriction to safety policies (as opposed to arbitrary functional behavior) results in simpler proof problems that can generally be solved by fully automatic theorem provers. An automated linking mechanism between the safety conditions and the code provides some of the traceability mandated by process standards such as DO-178B. An automated explanation mechanism uses semantic markup added by the verification condition generator to produce natural-language explanations of the safety conditions and thus supports their interpretation in relation to the code. The system provides an automatically generated certification browser that lets users inspect the (generated) code along with the safety conditions (including textual explanations), and uses hyperlinks to automate tracing between the two levels. Here, the explanations reflect the logical structure of the safety obligation, but the mechanism can in principle be customized using different sets of domain concepts. The interface also provides some limited control over the certification process itself. Our long-term goal is a seamless integration of certification, code generation, and manual coding that results in a "certified pipeline" in which specifications are automatically transformed into executable code, together with the supporting artifacts necessary for achieving and demonstrating the high level of assurance needed in the aerospace domain.
Jo, Sunhwan; Song, Kevin C.; Desaire, Heather; MacKerell, Alexander D.; Im, Wonpil
2011-01-01
Understanding how glycosylation affects protein structure, dynamics, and function is an emerging and challenging problem in biology. As a first step toward glycan modeling in the context of structural glycobiology, we have developed Glycan Reader and integrated it into the CHARMM-GUI, http://www.charmm-gui.org/input/glycan. Glycan Reader greatly simplifies the reading of PDB structure files containing glycans through (i) detection of carbohydrate molecules, (ii) automatic annotation of carbohydrates based on their three-dimensional structures, (iii) recognition of glycosidic linkages between carbohydrates as well as N-/O-glycosidic linkages to proteins, and (iv) generation of inputs for the biomolecular simulation program CHARMM with the proper glycosidic linkage setup. In addition, Glycan Reader is linked to other functional modules in CHARMM-GUI, allowing users to easily generate carbohydrate or glycoprotein molecular simulation systems in solution or membrane environments and visualize the electrostatic potential on glycoprotein surfaces. These tools are useful for studying the impact of glycosylation on protein structure and dynamics. PMID:21815173
Rigden, Daniel J; Thomas, Jens M H; Simkovic, Felix; Simpkin, Adam; Winn, Martyn D; Mayans, Olga; Keegan, Ronan M
2018-03-01
Molecular replacement (MR) is the predominant route to solution of the phase problem in macromolecular crystallography. Although routine in many cases, it becomes more effortful and often impossible when the available experimental structures typically used as search models are only distantly homologous to the target. Nevertheless, with current powerful MR software, relatively small core structures shared between the target and known structure, of 20-40% of the overall structure for example, can succeed as search models where they can be isolated. Manual sculpting of such small structural cores is rarely attempted and is dependent on the crystallographer's expertise and understanding of the protein family in question. Automated search-model editing has previously been performed on the basis of sequence alignment, in order to eliminate, for example, side chains or loops that are not present in the target, or on the basis of structural features (e.g. solvent accessibility) or crystallographic parameters (e.g. B factors). Here, based on recent work demonstrating a correlation between evolutionary conservation and protein rigidity/packing, novel automated ways to derive edited search models from a given distant homologue over a range of sizes are presented. A variety of structure-based metrics, many readily obtained from online webservers, can be fed to the MR pipeline AMPLE to produce search models that succeed with a set of test cases where expertly manually edited comparators, further processed in diverse ways with MrBUMP, fail. Further significant performance gains result when the structure-based distance geometry method CONCOORD is used to generate ensembles from the distant homologue. To our knowledge, this is the first such approach whereby a single structure is meaningfully transformed into an ensemble for the purposes of MR. Additional cases further demonstrate the advantages of the approach. 
CONCOORD is freely available and computationally inexpensive, so these novel methods offer readily available new routes to solve difficult MR cases.
A Hybrid Approach for the Automated Finishing of Bacterial Genomes
Robins, William P.; Chin, Chen-Shan; Webster, Dale; Paxinos, Ellen; Hsu, David; Ashby, Meredith; Wang, Susana; Peluso, Paul; Sebra, Robert; Sorenson, Jon; Bullard, James; Yen, Jackie; Valdovino, Marie; Mollova, Emilia; Luong, Khai; Lin, Steven; LaMay, Brianna; Joshi, Amruta; Rowe, Lori; Frace, Michael; Tarr, Cheryl L.; Turnsek, Maryann; Davis, Brigid M; Kasarskis, Andrew; Mekalanos, John J.; Waldor, Matthew K.; Schadt, Eric E.
2013-01-01
Dramatic improvements in DNA sequencing technology have revolutionized our ability to characterize most genomic diversity. However, accurate resolution of large structural events has remained challenging due to the comparatively shorter read lengths of second-generation technologies. Emerging third-generation sequencing technologies, which yield markedly increased read length on rapid time scales and for low cost, have the potential to address assembly limitations. Here we combine sequencing data from second- and third-generation DNA sequencing technologies to assemble the two-chromosome genome of a recent Haitian cholera outbreak strain into two nearly finished contigs at > 99.9% accuracy. Complex regions with clinically significant structure were completely resolved. In separate control assemblies on experimental and simulated data for the canonical N16961 reference we obtain 14 and 8 scaffolds greater than 1kb, respectively, correcting several errors in the underlying source data. This work provides a blueprint for the next generation of rapid microbial identification and full-genome assembly. PMID:22750883
NASA Technical Reports Server (NTRS)
Kamhawi, Hilmi N.
2012-01-01
This report documents the work performed from March 2010 to March 2012. The Integrated Design and Engineering Analysis (IDEA) environment is a collaborative environment based on an object-oriented, multidisciplinary, distributed framework using the Adaptive Modeling Language (AML) as a framework and supporting the configuration design and parametric CFD grid generation. This report will focus on describing the work in the area of parametric CFD grid generation using novel concepts for defining the interaction between the mesh topology and the geometry in such a way as to separate the mesh topology from the geometric topology while maintaining the link between the mesh topology and the actual geometry.
Automatically generated code for relativistic inhomogeneous cosmologies
NASA Astrophysics Data System (ADS)
Bentivegna, Eloisa
2017-02-01
The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.
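The automated code generation pattern that Kranc provides (symbolic evolution equations lowered to compiled stencil kernels) can be caricatured in a few lines. The equation (a 1D wave equation), the variable names, and the emitted C are illustrative only; Kranc handles full tensorial expressions and Cactus integration.

```python
# Much-reduced sketch of symbolic-to-code generation: evolution
# equations are written symbolically and lowered to a C stencil kernel.

def d2(f):
    """Second-order centered second derivative as a C expression."""
    return f"({f}[i+1] - 2.0*{f}[i] + {f}[i-1])*idx2"

# Wave equation in first-order form: phi_t = Pi, Pi_t = phi_xx
equations = {"phi": "Pi[i]", "Pi": d2("phi")}

def generate_kernel(eqs):
    lines = ["void rhs(int n, double idx2, "
             + ", ".join(f"const double *{v}, double *{v}_rhs" for v in eqs)
             + ") {",
             "  for (int i = 1; i < n - 1; ++i) {"]
    for var, rhs in eqs.items():
        lines.append(f"    {var}_rhs[i] = {rhs};")
    lines += ["  }", "}"]
    return "\n".join(lines)

kernel = generate_kernel(equations)
```

Keeping the equations symbolic and regenerating the kernel means discretization order or loop structure can be changed in one place, which is the main payoff of this style of automation.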
Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter
NASA Technical Reports Server (NTRS)
Belknap, Shannon; Zhang, Michael
2013-01-01
The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage, the correct tile type, tile thickness, structure thickness, and SIP thickness at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.
NMR-based automated protein structure determination.
Würz, Julia M; Kazemi, Sina; Schmidt, Elena; Bagaria, Anurag; Güntert, Peter
2017-08-15
NMR spectra analysis for protein structure determination can now in many cases be performed by automated computational methods. This overview of the computational methods for NMR protein structure analysis presents recent automated methods for signal identification in multidimensional NMR spectra, sequence-specific resonance assignment, collection of conformational restraints, and structure calculation, as implemented in the CYANA software package. These algorithms are sufficiently reliable and integrated into one software package to enable the fully automated structure determination of proteins starting from NMR spectra without manual interventions or corrections at intermediate steps, with an accuracy of 1-2 Å backbone RMSD in comparison with manually solved reference structures. Copyright © 2017 Elsevier Inc. All rights reserved.
Bootstrapped Learning Analysis and Curriculum Development Environment (BLADE)
2012-02-01
framework Development of the automated teacher The software development aspect of the BL program was conducted primarily in the Java programming...parameters are analogous to Java class data members or to fields in a C structure. Here is an example composite IL object from Blocks World, an...2 and 3, alternative methods of implementing generators were developed, first in Java , later in Ruby. Both of these alternatives lowered the
Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data
NASA Astrophysics Data System (ADS)
Singh, Vishwajit
2016-04-01
This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Various approaches are employed to build an interactive algorithm that picks interval velocities for 1000-5000 continuous normal-moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for each approach. The key ingredients these approaches use at the velocity analysis stage are the semblance grid and a starting model of interval velocity. Basin-hopping optimization is employed to drive convergence of the misfit function toward local minima, and a SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity problem. Synthetic case studies address the performance of the velocity picker, generating models that closely fit the semblance peaks. A similar linear relationship between average depth and reflection time for the synthetic and estimated models supports using the picked interval velocities as the starting model for full waveform inversion, to project a more accurate velocity structure of the subsurface. The remaining challenges are (1) building an accurate starting model for projecting a more accurate velocity structure of the subsurface, and (2) reducing the computational cost of the algorithm by pre-calculating the semblance grid to make auto-picking more feasible.
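The semblance-scan ingredient can be sketched for a single synthetic reflector. The geometry, sampling, and trial velocity grid below are invented; a real implementation would scan many time gates and use tapered wavelets rather than unit spikes.

```python
import math
import numpy as np

# Synthetic CMP gather: one spike per trace on the NMO hyperbola
dt, t0, v_true = 0.004, 1.0, 2000.0
offsets = np.arange(0.0, 2400.0, 400.0)
nsamp = 500
gather = np.zeros((len(offsets), nsamp))
for k, x in enumerate(offsets):
    gather[k, int(round(math.hypot(t0, x / v_true) / dt))] = 1.0

def semblance(gather, t0, v, win=2):
    """Stack energy over total energy in a small window around t(x)."""
    num = den = 0.0
    for w in range(-win, win + 1):
        column = np.array([
            gather[k, int(round(math.hypot(t0, x / v) / dt)) + w]
            for k, x in enumerate(offsets)])
        num += column.sum() ** 2
        den += (column ** 2).sum()
    return num / (len(offsets) * den) if den else 0.0

trial = np.arange(1400.0, 2700.0, 100.0)
scores = [semblance(gather, t0, v) for v in trial]
v_best = trial[int(np.argmax(scores))]
```

Only the correct trial velocity flattens the hyperbola so that all traces stack coherently, which is why the semblance peak sits at the true stacking velocity.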
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
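The kernel interpolation of a recorded trend onto a depth grid can be sketched as a Nadaraya-Watson estimate with a Gaussian kernel. The feature trend, depths, and kernel width below are invented, not clinical MER data, and the paper's comparison of kernel widths is reduced to a single choice.

```python
import numpy as np

# Invented feature trend along electrode depth (e.g. normalized power)
depths = np.linspace(0.0, 20.0, 41)
feature = np.exp(-(depths - 10.0) ** 2 / 8.0)

def kernel_profile(grid, depths, values, width=0.5):
    """Gaussian-kernel weighted average of the samples at each depth."""
    out = np.empty_like(grid)
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((depths - g) / width) ** 2)
        out[i] = np.dot(w, values) / w.sum()
    return out

grid = np.linspace(0.0, 20.0, 201)
profile = kernel_profile(grid, depths, feature)

# Agreement with the underlying profile, as a correlation coefficient
truth = np.exp(-(grid - 10.0) ** 2 / 8.0)
r = np.corrcoef(profile, truth)[0, 1]
```

As in the paper, the kernel width trades smoothing against resolution: a wider kernel suppresses noise but flattens sharp transitions at structure boundaries.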
PLIP: fully automated protein-ligand interaction profiler.
Salentin, Sebastian; Schreiber, Sven; Haupt, V Joachim; Adasme, Melissa F; Schroeder, Michael
2015-07-01
The characterization of interactions in protein-ligand complexes is essential for research in structural bioinformatics, drug discovery and biology. However, comprehensive tools are not freely available to the research community. Here, we present the protein-ligand interaction profiler (PLIP), a novel web service for fully automated detection and visualization of relevant non-covalent protein-ligand contacts in 3D structures, freely available at projects.biotec.tu-dresden.de/plip-web. The input is either a Protein Data Bank structure, a protein or ligand name, or a custom protein-ligand complex (e.g. from docking). In contrast to other tools, the rule-based PLIP algorithm does not require any structure preparation. It returns a list of detected interactions on single atom level, covering seven interaction types (hydrogen bonds, hydrophobic contacts, pi-stacking, pi-cation interactions, salt bridges, water bridges and halogen bonds). PLIP stands out by offering publication-ready images, PyMOL session files to generate custom images and parsable result files to facilitate successive data processing. The full python source code is available for download on the website. PLIP's command-line mode allows for high-throughput interaction profiling. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
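A rule-based geometric check of the kind PLIP applies can be sketched for one interaction type, hydrogen bonds. The distance and angle thresholds below are common textbook values chosen for illustration, not necessarily PLIP's exact criteria, and the coordinates are invented.

```python
import math

MAX_DA_DIST = 3.5      # donor-acceptor distance cutoff (angstroms)
MIN_DHA_ANGLE = 120.0  # donor-H...acceptor angle cutoff (degrees)

def angle(a, b, c):
    """Angle a-b-c in degrees."""
    v1 = [p - q for p, q in zip(a, b)]
    v2 = [p - q for p, q in zip(c, b)]
    cosang = (sum(p * q for p, q in zip(v1, v2))
              / (math.dist(a, b) * math.dist(c, b)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def is_hbond(donor, hydrogen, acceptor):
    """Geometric rule: close donor-acceptor pair, near-linear D-H...A."""
    return (math.dist(donor, acceptor) <= MAX_DA_DIST
            and angle(donor, hydrogen, acceptor) >= MIN_DHA_ANGLE)

# Near-linear, close geometry is accepted; a distant acceptor is not
good = is_hbond((0, 0, 0), (1, 0, 0), (2.9, 0.1, 0))
bad = is_hbond((0, 0, 0), (1, 0, 0), (0, 3.9, 0))
```

Each of PLIP's seven interaction types boils down to a small set of such distance and angle rules applied at the single-atom level, which is why no structure preparation is needed.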
Measuring Performance with Library Automated Systems.
ERIC Educational Resources Information Center
O'Farrell, John P.
2000-01-01
Investigates the capability of three library automated systems to generate some of the datasets necessary to form the ISO (International Standards Organization) standard on performance measurement within libraries, based on research in Liverpool John Moores University (United Kingdom). Concludes that the systems are weak in generating the…
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
Biofabricated constructs as tissue models: a short review.
Costa, Pedro F
2015-04-01
Biofabrication can currently provide reliable models for studying the development of cells and tissues in multiple environments. As the complexity of biofabricated constructs increases, so does their ability to closely mimic native tissues and organs. Various biofabrication technologies now allow cell/tissue constructs to be built precisely, over multiple dimension ranges and with great accuracy. These technologies can also assemble multiple types of cells and/or materials and generate constructs that closely mimic various types of tissues. Furthermore, the high degree of automation involved in these technologies enables the study of large arrays of testing conditions within increasingly smaller and automated devices, both in vitro and in vivo. Although not yet able to generate constructs similar to complex tissues and organs, biofabrication is rapidly evolving in that direction. One major hurdle to overcome before such complex detail can be achieved is the ability to generate complex vascular structures within biofabricated constructs. This review describes several of the most relevant technologies and methodologies currently used in biofabrication and provides a brief overview of their current and future potential applications.
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.
Kurczynska, Monika; Kotulska, Malgorzata
2018-01-01
Mirror protein structures are often considered artifacts in protein structure modeling. However, they may soon become a new branch of biochemistry. Moreover, methods of protein structure reconstruction based on residue-residue contact maps need a methodology for differentiating between models of native and mirror orientation, especially regarding the reconstructed backbones. We analyzed 130 500 structural protein models obtained from contact maps of 1 305 SCOP domains belonging to all 7 structural classes. On average, equal numbers of native and mirror models were obtained among the 100 models generated for each domain. Since structural features alone are often not sufficient to differentiate between the two orientations, we proposed applying various energy terms (ETs) from PyRosetta to separate native and mirror models. To automate the differentiation procedure, the k-means clustering algorithm was applied. Using total energy did not yield appropriate clusters: the clustering accuracy for class A (all helices) was no more than 0.52. We therefore tested a series of k-means clusterings based on various combinations of ETs; applying the two most differentiating ETs for each class gave satisfactory results. To unify the method across structural classes, the two best ETs for each class were considered. The final k-means clustering used three common ETs: the probability of an amino acid assuming certain values of the dihedral angles Φ and Ψ, Ramachandran preferences, and Coulomb interactions. Clustering accuracies with these ETs ranged from 0.68 to 0.76, with sensitivity and selectivity between 0.68 and 0.87, depending on the structural class.
The method can be applied to all fully-automated tools for protein structure reconstruction based on contact maps, especially those analyzing big sets of models.
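The energy-term clustering described above can be sketched with a minimal two-cluster Lloyd's algorithm. The two features, the toy "native-like" and "mirror-like" populations, and the deterministic initialization below are illustrative assumptions, not the paper's actual PyRosetta terms or data:

```python
import numpy as np

def kmeans_two(points, n_iter=25):
    """Minimal 2-means (Lloyd's algorithm). Deterministic init: the points
    with the smallest and largest first coordinate become the seeds."""
    centers = np.stack([points[points[:, 0].argmin()],
                        points[points[:, 0].argmax()]])
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for k in range(2):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return labels

# Toy feature vectors: two energy terms per model (e.g. a dihedral-preference
# term and a Coulomb term), drawn so the two populations are well separated.
rng = np.random.default_rng(1)
native = rng.normal([-1.0, -0.5], 0.1, size=(50, 2))
mirror = rng.normal([0.8, 0.9], 0.1, size=(50, 2))
X = np.vstack([native, mirror])
labels = kmeans_two(X)
```

With the two-cluster assignment in hand, the cluster whose centroid sits at the lower energy values would be taken as the native-like group.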
2012-01-01
Background: The NCBI Conserved Domain Database (CDD) consists of a collection of multiple sequence alignments of protein domains that are at various stages of being manually curated into evolutionary hierarchies based on conserved and divergent sequence and structural features. These domain models are annotated to provide insights into the relationships between sequence, structure and function via web-based BLAST searches. Results: Here we automate the generation of conserved domain (CD) hierarchies using a combination of heuristic and Markov chain Monte Carlo (MCMC) sampling procedures, starting from a (typically very large) multiple sequence alignment. This procedure relies on statistical criteria to define each hierarchy based on the conserved and divergent sequence patterns associated with protein functional specialization. At the same time, this facilitates the sequence and structural annotation of residues that are functionally important. These statistical criteria also provide a means to objectively assess the quality of CD hierarchies, a non-trivial task considering that the protein subgroups are often very distantly related, a situation in which standard phylogenetic methods can be unreliable. Our aim here is to automatically generate (typically sub-optimal) hierarchies that, based on statistical criteria and visual comparisons, are comparable to manually curated hierarchies; this serves as the first step toward the ultimate goal of obtaining optimal hierarchical classifications. A plot of runtimes for the most time-intensive (non-parallelizable) part of the algorithm indicates a nearly linear time complexity, so that even for the extremely large Rossmann fold protein class, results were obtained in about a day. Conclusions: This approach automates the rapid creation of protein domain hierarchies and thus will eliminate one of the most time-consuming aspects of conserved domain database curation.
At the same time, it also facilitates protein domain annotation by identifying those pattern residues that most distinguish each protein domain subgroup from other related subgroups. PMID:22726767
Lin, Mai; Ranganathan, David; Mori, Tetsuya; Hagooly, Aviv; Rossin, Raffaella; Welch, Michael J; Lapi, Suzanne E
2012-10-01
Interest in using (68)Ga is rapidly increasing for clinical PET applications due to its favorable imaging characteristics and increased accessibility. The focus of this study was to provide our long-term evaluations of the two TiO(2)-based (68)Ge/(68)Ga generators and develop an optimized automation strategy to synthesize [(68)Ga]DOTATOC by using HEPES as a buffer system. This data will be useful in standardizing the evaluation of (68)Ge/(68)Ga generators and automation strategies to comply with regulatory issues for clinical use. Copyright © 2012 Elsevier Ltd. All rights reserved.
Antony, Bhavna Josephine; Kim, Byung-Jin; Lang, Andrew; Carass, Aaron; Prince, Jerry L; Zack, Donald J
2017-01-01
The use of spectral-domain optical coherence tomography (SD-OCT) is becoming commonplace for the in vivo longitudinal study of murine models of ophthalmic disease. Longitudinal studies, however, generate large quantities of data, the manual analysis of which is very challenging due to the time-consuming nature of generating delineations. Thus, it is of importance that automated algorithms be developed to facilitate accurate and timely analysis of these large datasets. Furthermore, as the models target a variety of diseases, the associated structural changes can also be extremely disparate. For instance, in the light damage (LD) model, which is frequently used to study photoreceptor degeneration, the outer retina appears dramatically different from the normal retina. To address these concerns, we have developed a flexible graph-based algorithm for the automated segmentation of mouse OCT volumes (ASiMOV). This approach incorporates a machine-learning component that can be easily trained for different disease models. To validate ASiMOV, the automated results were compared to manual delineations obtained from three raters on healthy and BALB/cJ mice post LD. It was also used to study a longitudinal LD model, where five control and five LD mice were imaged at four timepoints post LD. The total retinal thickness and the outer retina (comprising the outer nuclear layer, and inner and outer segments of the photoreceptors) were unchanged the day after the LD, but subsequently thinned significantly (p < 0.01). The retinal nerve fiber-ganglion cell complex and the inner plexiform layers, however, remained unchanged for the duration of the study.
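As a small illustration of the kind of thickness measurement such layer segmentations enable, the sketch below converts two hypothetical per-column boundary delineations into a mean layer thickness; the boundary values and the axial pixel scale are invented for the example:

```python
import numpy as np

# Toy per-column boundary positions (in pixels) from one segmented B-scan:
# the inner limiting membrane and the outer boundary of the outer retina.
ilm = np.array([40.0, 41.0, 40.5, 42.0])
outer = np.array([140.0, 141.5, 139.0, 142.0])
AXIAL_UM_PER_PIXEL = 1.6  # assumed, device-dependent axial sampling

# Thickness per A-scan column, then averaged across the scan.
thickness_um = (outer - ilm) * AXIAL_UM_PER_PIXEL
mean_thickness = float(thickness_um.mean())
```

Repeating this measurement at each imaging timepoint gives the longitudinal thickness trajectories that the study's statistics (e.g. the p < 0.01 thinning) are computed over.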
An Intelligent Automation Platform for Rapid Bioprocess Design.
Wu, Tianyi; Zhou, Yuhong
2014-08-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buurma, Christopher; Sen, Fatih G.; Paulauskas, Tadas
2015-01-01
Grain boundaries (GB) in poly-CdTe solar cells play an important role in species diffusion, segregation, defect formation, and carrier recombination. While the creation of specific high-symmetry interfaces can be straightforward, the creation of general GB structures in many material systems is difficult if periodic boundary conditions are to be enforced. Here we describe a novel algorithm and implementation to generate initial general GB structures for CdTe in an automated way, and we investigate some of these structures using density functional theory (DFT). Example structures include those with bi-crystals already fabricated for comparison, and those planned for investigation in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkel, D; Bol, GH; Asselen, B van
Purpose: To develop an automated radiotherapy treatment planning and optimization workflow for prostate cancer in order to generate clinical treatment plans. Methods: A fully automated radiotherapy treatment planning and optimization workflow was developed based on the treatment planning system Monaco (Elekta AB, Stockholm, Sweden). To evaluate our method, a retrospective planning study (n=100) was performed on patients treated for prostate cancer with 5-field intensity-modulated radiotherapy, receiving a dose of 35×2 Gy to the prostate and vesicles and a simultaneous integrated boost of 35×0.2 Gy to the prostate only. A comparison was made between the dosimetric values of the automatically and manually generated plans. Operator time to generate a plan and plan efficiency were measured. Results: A comparison of the dosimetric values shows that automatically generated plans yield more beneficial dosimetric values. In automatic plans, reductions of 43% in the V72Gy of the rectum and 13% in the V72Gy of the bladder are observed when compared to the manually generated plans. Smaller variance in dosimetric values is seen, i.e. the intra- and interplanner variability is decreased. For 97% of the automatically generated plans and 86% of the clinical plans, all criteria for target coverage and organ-at-risk constraints are met. The number of plan segments and monitor units is reduced by 13% and 9%, respectively. Automated planning requires less than one minute of operator time compared to over an hour for manual planning. Conclusion: The automatically generated plans are highly suitable for clinical use. The plans have less variance, and a large gain in time efficiency has been achieved. Currently, a pilot study is being performed, comparing the preference of the clinician and clinical physicist for the automatic versus manual plan.
Future work will include expanding our automated treatment planning method to other tumor sites and developing other automated radiotherapy workflows.
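The V72Gy figures quoted above are dose-volume metrics: the fraction of a structure's volume receiving at least 72 Gy. A minimal sketch, using invented per-voxel doses:

```python
import numpy as np

def v_dose(dose_gy, threshold_gy):
    """Fraction of the structure's voxels receiving at least threshold_gy.
    dose_gy: 1-D array of per-voxel doses within the structure."""
    return float(np.mean(dose_gy >= threshold_gy))

# Toy rectum dose distributions for two hypothetical plans.
manual = np.array([60, 65, 70, 73, 74, 75, 68, 50], dtype=float)
auto = np.array([60, 62, 66, 72, 68, 69, 64, 50], dtype=float)
manual_v72 = v_dose(manual, 72.0)  # 3 of 8 voxels at >= 72 Gy
auto_v72 = v_dose(auto, 72.0)      # 1 of 8 voxels
```

The reductions reported in the abstract are relative differences between such metrics for the manual and automatic plans.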
Automated choroid segmentation based on gradual intensity distance in HD-OCT images.
Chen, Qiang; Fan, Wen; Niu, Sijie; Shi, Jiajia; Shen, Honglie; Yuan, Songtao
2015-04-06
The choroid is an important structure of the eye and plays a vital role in the pathology of retinal diseases. This paper presents an automated choroid segmentation method for high-definition optical coherence tomography (HD-OCT) images, including Bruch's membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. An improved retinal nerve fiber layer (RNFL) complex removal algorithm is presented to segment BM by considering the structure characteristics of retinal layers. By analyzing the characteristics of CSI boundaries, we present a novel algorithm to generate a gradual intensity distance image. Then an improved 2-D graph search method with curve smooth constraints is used to obtain the CSI segmentation. Experimental results with 212 HD-OCT images from 110 eyes in 66 patients demonstrate that the proposed method can achieve high segmentation accuracy. The mean choroid thickness difference and overlap ratio between our proposed method and outlines drawn by experts were 6.72 µm and 85.04%, respectively.
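For context, an overlap ratio of the kind used in such evaluations can be computed as below. This sketch uses intersection-over-union on toy masks; the paper's exact overlap definition (e.g. Dice) may differ:

```python
import numpy as np

def overlap_ratio(mask_a, mask_b):
    """Intersection-over-union (Jaccard index) of two binary region masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 1.0

# Toy choroid masks: automated result vs. expert outline.
auto = np.zeros((4, 4), dtype=bool); auto[1:3, :] = True
expert = np.zeros((4, 4), dtype=bool); expert[1:4, :] = True
ratio = overlap_ratio(auto, expert)
```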
Automated selection of stabilizing mutations in designed and natural proteins.
Borgo, Benjamin; Havranek, James J
2012-01-31
The ability to engineer novel protein folds, conformations, and enzymatic activities offers enormous potential for the development of new protein therapeutics and biocatalysts. However, many de novo and redesigned proteins exhibit poor hydrophobic packing in their predicted structures, leading to instability or insolubility. The general utility of rational, structure-based design would greatly benefit from an improved ability to generate well-packed conformations. Here we present an automated protocol within the RosettaDesign framework that can identify and improve poorly packed protein cores by selecting a series of stabilizing point mutations. We apply our method to previously characterized designed proteins that exhibited a decrease in stability after a full computational redesign. We further demonstrate the ability of our method to improve the thermostability of a well-behaved native protein. In each instance, biophysical characterization reveals that we were able to stabilize the original proteins against chemical and thermal denaturation. We believe our method will be a valuable tool for both improving upon designed proteins and conferring increased stability upon native proteins.
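The selection loop described above can be caricatured as a greedy scan over point mutations. The `score` function below is an invented stand-in for Rosetta's packing energies, and the sequence and "core" positions are toy assumptions, not part of the published protocol:

```python
# Toy greedy selection of stabilizing point mutations.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AILMFVW")
CORE = (1, 3)  # hypothetical poorly packed core positions

def score(seq):
    # Invented stand-in energy: lower is better; rewards hydrophobic
    # residues at the designated core positions.
    return -sum(seq[i] in HYDROPHOBIC for i in CORE)

def select_stabilizing_mutations(seq):
    """Accept, position by position, the best single point mutation
    whenever it improves the score."""
    seq = list(seq)
    for pos in CORE:
        best = min(AMINO_ACIDS,
                   key=lambda aa: score(seq[:pos] + [aa] + seq[pos + 1:]))
        trial = seq[:pos] + [best] + seq[pos + 1:]
        if score(trial) < score(seq):
            seq = trial
    return "".join(seq)

result = select_stabilizing_mutations("GKGS")
```

The real protocol evaluates candidates with physically based energies inside RosettaDesign rather than a counting heuristic, but the accept-if-improved structure is the same idea.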
Evolving cell models for systems and synthetic biology.
Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio
2010-03-01
This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
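The four objective-aggregation schemes can be written generically as follows; these are standard formulations and may differ in detail from the paper's exact definitions:

```python
import random

def equally_weighted_sum(objs):           # (1) equally weighted sum
    return sum(objs) / len(objs)

def normalized_sum(objs, ranges):         # (2) normalization method
    # Rescale each objective to [0, 1] by its (min, max) range, then average.
    return sum((o - lo) / (hi - lo) for o, (lo, hi) in zip(objs, ranges)) / len(objs)

def randomly_weighted_sum(objs, rng):     # (3) randomly weighted sum
    w = [rng.random() for _ in objs]
    total = sum(w)
    return sum(wi / total * o for wi, o in zip(w, objs))

def equally_weighted_product(objs):       # (4) equally weighted product
    prod = 1.0
    for o in objs:
        prod *= o
    return prod ** (1.0 / len(objs))      # geometric mean

objs = [0.2, 0.8]
s = equally_weighted_sum(objs)
n = normalized_sum(objs, [(0, 1), (0, 1)])
p = equally_weighted_product(objs)
r = randomly_weighted_sum(objs, random.Random(0))
```

The product form penalizes any single poor objective more harshly than the sum forms, which is one reason such comparisons of aggregation schemes matter in practice.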
NASA Astrophysics Data System (ADS)
Hasan, M.; Helal, A.; Gabr, M.
2014-12-01
In this project, we focus on providing a computer-automated platform for better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructure such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of the rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available from previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates the input of parameters, divides data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood-protective structures based on user feedback quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.
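A minimal sketch of the split-train-predict workflow, with a single logistic unit standing in for the MATLAB ANN tool; the features, labels, and split proportions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-structure features: [side-slope ratio, water-rise rate,
# high-water duration], with a binary risk label; illustrative stand-ins
# for the parameters listed in the abstract.
X = rng.random((200, 3))
y = (X @ np.array([2.0, 1.5, 1.0]) > 2.2).astype(float)

# Divide data into training and testing sets, as the interface does.
idx = rng.permutation(len(X))
train, test = idx[:150], idx[150:]

# One-neuron "network" (logistic unit) trained by full-batch gradient descent.
w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X[train] @ w + b)))
    g = p - y[train]
    w -= lr * X[train].T @ g / len(train)
    b -= lr * g.mean()

# Evaluate on the held-out set.
pred = 1.0 / (1.0 + np.exp(-(X[test] @ w + b))) > 0.5
accuracy = float((pred == (y[test] > 0.5)).mean())
```

A real deployment would use a multi-layer network and physically derived features, but the train/test separation shown here is the part the GUI automates.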
12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure
Code of Federal Regulations, 2012 CFR
2012-01-01
.../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...
12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure
Code of Federal Regulations, 2011 CFR
2011-01-01
.../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...
12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure
Code of Federal Regulations, 2014 CFR
2014-01-01
.../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...
12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure
Code of Federal Regulations, 2013 CFR
2013-01-01
.../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...
Automated Tutoring in Interactive Environments: A Task-Centered Approach.
ERIC Educational Resources Information Center
Wolz, Ursula; And Others
1989-01-01
Discusses tutoring and consulting functions in interactive computer environments. Tutoring strategies are considered, the expert model and the user model are described, and GENIE (Generated Informative Explanations)--an answer generating system for the Berkeley Unix Mail system--is explained as an example of an automated consulting system. (33…
DOT National Transportation Integrated Search
2006-11-01
This is the final report of an 18-month project to: (1) review Next Generation Air Transportation System (NGATS) Joint Planning and Development Office (JPDO) documents as they pertain to human-automation interaction; (2) review past system failures i...
This paper describes an automated system for the oxidation state specific speciation of inorganic and methylated arsenicals by selective hydride generation - cryotrapping- gas chromatography - atomic absorption spectrometry with the multiatomizer. The corresponding arsines are ge...
Automated Planning and Scheduling for Planetary Rover Distributed Operations
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Rabideau, Gregg; Tso, Kam S.; Chien, Steve
1999-01-01
Automated planning and scheduling, including automated path planning, has been integrated with an Internet-based distributed operations system for planetary rover operations. The resulting prototype system enables faster generation of valid rover command sequences by a distributed planetary rover operations team. The Web Interface for Telescience (WITS) provides Internet-based distributed collaboration, the Automated Scheduling and Planning Environment (ASPEN) provides automated planning and scheduling, and an automated path planner provides path planning. The system was demonstrated on the Rocky 7 research rover at JPL.
Automated feature extraction for retinal vascular biometry in zebrafish using OCT angiography
NASA Astrophysics Data System (ADS)
Bozic, Ivan; Rao, Gopikrishna M.; Desai, Vineet; Tao, Yuankai K.
2017-02-01
Zebrafish have been identified as an ideal model for angiogenesis because of anatomical and functional similarities with other vertebrates. The scale and complexity of zebrafish assays are limited by the need to manually treat and serially screen animals, and recent technological advances have focused on automation and improving throughput. Here, we use optical coherence tomography (OCT) and OCT angiography (OCT-A) to perform noninvasive, in vivo imaging of retinal vasculature in zebrafish. OCT-A summed voxel projections were low pass filtered and skeletonized to create an en face vascular map prior to connectivity analysis. Vascular segmentation was referenced to the optic nerve head (ONH), which was identified by automatically segmenting the retinal pigment epithelium boundary on the OCT structural volume. The first vessel branch generation was identified as skeleton segments with branch points closest to the ONH, and subsequent generations were found iteratively by expanding the search space outwards from the ONH. Biometric parameters, including length, curvature, and branch angle of each vessel segment were calculated and grouped by branch generation. Despite manual handling and alignment of each animal over multiple time points, we observe distinct qualitative patterns that enable unique identification of each eye from individual animals. We believe this OCT-based retinal biometry method can be applied for automated animal identification and handling in high-throughput organism-level pharmacological assays and genetic screens. In addition, these extracted features may enable high-resolution quantification of longitudinal vascular changes as a method for studying zebrafish models of retinal neovascularization and vascular remodeling.
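The iterative generation-labeling step can be sketched as a breadth-first search over the skeleton's branch-point graph, with the ONH as the root; the toy graph below is an assumption for illustration:

```python
from collections import deque

# Toy skeleton graph: nodes are branch points, edges are vessel segments;
# "ONH" marks the optic nerve head. All names are illustrative.
segments = [("ONH", "a"), ("ONH", "b"), ("a", "c"), ("a", "d"), ("c", "e")]

adj = {}
for u, v in segments:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

# Breadth-first search outward from the ONH assigns each branch point a depth.
generation = {"ONH": 0}
queue = deque(["ONH"])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in generation:
            generation[v] = generation[u] + 1
            queue.append(v)

# A segment's branch generation is the depth of its more distal endpoint.
seg_gen = {(u, v): max(generation[u], generation[v]) for u, v in segments}
```

Per-generation biometrics (length, curvature, branch angle) would then be grouped by these depth labels.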
An ontology-driven tool for structured data acquisition using Web forms.
Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A
2017-08-01
Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL), the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structure Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
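The query pattern over semantically enriched form data can be illustrated with a toy in-memory triple store; a real deployment would use an RDF library and SPARQL, and every subject, predicate, and object name below is hypothetical:

```python
# Toy triples standing in for the RDF graph of acquired form data.
triples = [
    ("patient1", "hasAssessment", "asmt1"),
    ("asmt1", "hasFinding", "impaired_motor_function"),
    ("asmt1", "score", 3),
    ("patient2", "hasAssessment", "asmt2"),
    ("asmt2", "hasFinding", "normal_motor_function"),
]

def query(subject=None, predicate=None, obj=None):
    """Pattern-match triples; None acts as a wildcard, like a SPARQL variable."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "Which assessments found impaired motor function?"
hits = [s for s, _, _ in query(predicate="hasFinding",
                               obj="impaired_motor_function")]
```

In SPARQL, the same question would be a basic graph pattern with a variable in the subject position; the wildcard matching above is the dictionary-level analogue.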
Automated Tape Laying Machine for Composite Structures.
The invention comprises an automated tape laying machine for laying tape on a composite structure. The tape laying machine has a tape laying head...neatly cut. The automated tape laying device utilizes narrow-width tape to increase machine flexibility and reduce wastage.
A Recommendation Algorithm for Automating Corollary Order Generation
Klann, Jeffrey; Schadow, Gunther; McCoy, JM
2009-01-01
Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875
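Item-based collaborative filtering over order history can be sketched as cosine similarity on item co-occurrence counts; the toy orders and item names below are invented, not data from the study, and the study additionally weights suggestions with association-rule interestingness measures:

```python
import math

# Toy order history: each set holds the orders from one encounter.
orders = [
    {"warfarin", "INR"},
    {"warfarin", "INR", "CBC"},
    {"heparin", "aPTT"},
    {"warfarin", "CBC"},
    {"heparin", "aPTT", "CBC"},
]
items = sorted(set().union(*orders))

def cosine(a, b):
    """Item-item cosine similarity computed from co-occurrence counts."""
    both = sum(1 for o in orders if a in o and b in o)
    na = sum(1 for o in orders if a in o)
    nb = sum(1 for o in orders if b in o)
    return both / math.sqrt(na * nb) if na and nb else 0.0

def suggest(item, k=2):
    """Top-k corollary-order candidates for `item`, highest similarity first."""
    scored = [(cosine(item, other), other) for other in items if other != item]
    return [o for s, o in sorted(scored, reverse=True)[:k] if s > 0]
```

Run over the full 866,445-order history, a ranking of this kind is what produces the rough-cut corollary-order list that the expert panel then reviews.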
Semi-automated ontology generation and evolution
NASA Astrophysics Data System (ADS)
Stirtzinger, Anthony P.; Anken, Craig S.
2009-05-01
Extending the notion of data models or object models, an ontology can provide rich semantic definition not only to the meta-data but also to the instance data of domain knowledge, making these semantic definitions available in machine-readable form. However, the generation of an effective ontology is a difficult task involving considerable labor and skill. This paper discusses an Ontology Generation and Evolution Processor (OGEP) aimed at automating this process, requesting user input only when unresolvable ambiguous situations occur. OGEP directly attacks the main barrier which prevents automated (or self-learning) ontology generation: the ability to understand the meaning of artifacts and the relationships the artifacts have to the domain space. OGEP leverages existing lexical-to-ontological mappings in the form of WordNet and the Suggested Upper Merged Ontology (SUMO), integrated with a semantic pattern-based structure referred to as the Semantic Grounding Mechanism (SGM) and implemented as a Corpus Reasoner. OGEP processing is initiated by a Corpus Parser performing a lexical analysis of the corpus, reading in a document (or corpus) and preparing it for processing by annotating words and phrases. After the Corpus Parser is done, the Corpus Reasoner uses the parts-of-speech output to determine the semantic meaning of a word or phrase. The Corpus Reasoner is the crux of the OGEP system, analyzing, extrapolating, and evolving data from free text into cohesive semantic relationships. The Semantic Grounding Mechanism provides a basis for identifying and mapping semantic relationships. By blending together the WordNet lexicon and the SUMO ontological layout, the SGM is given breadth and depth in its ability to extrapolate semantic relationships between domain entities. The combination of all these components results in an innovative approach to user-assisted semantic-based ontology generation.
This paper will describe the OGEP technology in the context of the architectural components referenced above and identify a potential technology transition path to Scott AFB's Tanker Airlift Control Center (TACC) which serves as the Air Operations Center (AOC) for the Air Mobility Command (AMC).
ASTROS: A multidisciplinary automated structural design tool
NASA Technical Reports Server (NTRS)
Neill, D. J.
1989-01-01
ASTROS (Automated Structural Optimization System) is a finite-element-based multidisciplinary structural optimization procedure developed under Air Force sponsorship to perform automated preliminary structural design. The design task is the determination of the structural sizes that provide an optimal structure while satisfying numerous constraints from many disciplines. In addition to its automated design features, ASTROS provides a general transient and frequency response capability, as well as a special feature to perform a transient analysis of a vehicle subjected to a nuclear blast. The motivation for the development of a single multidisciplinary design tool is that such a tool can provide improved structural designs in less time than is currently needed. The role of such a tool is even more apparent as modern materials come into widespread use. Balancing conflicting requirements for the structure's strength and stiffness while exploiting the benefits of material anisotropy is perhaps an impossible task without assistance from an automated design tool. Finally, the use of a single tool can bring the design task into better focus among design team members, thereby improving their insight into the overall task.
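ASTROS couples finite-element analysis with formal multidisciplinary optimization; as a far simpler stand-in for the sizing step it automates, here is the classic fully-stressed resize for axial members, where each area is set so the member stress just meets the allowable, subject to a minimum gauge. The loads, allowable stress, and minimum gauge are made-up values, not anything from ASTROS itself.

```python
def size_members(loads, allowable, min_area):
    """Smallest cross-sectional area per axial member that keeps
    |stress| = |P|/A <= allowable, subject to a minimum-gauge constraint."""
    return [max(abs(p) / allowable, min_area) for p in loads]

# Hypothetical member loads (lb), allowable stress (psi), minimum gauge (in^2).
areas = size_members([12000.0, 300.0, -8000.0], allowable=20000.0, min_area=0.05)
```

Real sizing tools must also handle stiffness, buckling, and multidisciplinary constraints, which is what turns this one-liner into an optimization problem.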
Automated generation of radical species in crystalline carbohydrate using ab initio MD simulations.
Aalbergsjø, Siv G; Pauwels, Ewald; Van Yperen-De Deyne, Andy; Van Speybroeck, Veronique; Sagstuen, Einar
2014-08-28
As the chemical structures of radiation damaged molecules may differ greatly from their undamaged counterparts, investigation and description of radiation damaged structures is commonly biased by the researcher. Radical formation from ionizing radiation in crystalline α-l-rhamnose monohydrate has been investigated using a new method where the selection of radical structures is unbiased by the researcher. The method is based on using ab initio molecular dynamics (MD) studies to investigate how ionization damage can form, change and move. Diversity in the radical production is gained by using different points on the potential energy surface of the intact crystal as starting points for the ionizations and letting the initial velocities of the nuclei after ionization be generated randomly. 160 ab initio MD runs produced 12 unique radical structures for investigation. Out of these, 7 of the potential products have never previously been discussed, and 3 products are found to match with radicals previously observed by electron magnetic resonance experiments.
Automated generation of image products for Mars Exploration Rover Mission tactical operations
NASA Technical Reports Server (NTRS)
Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen
2005-01-01
This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product generating systems will also be provided with emphasis on interdependencies, including those for a) geometric rectification of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.
Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.
2013-01-01
We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231
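The per-structure scoring such a benchmark performs can be illustrated by comparing predicted and reference base pairs extracted from dot-bracket strings. This sketch uses invented structures and reports only sensitivity and PPV, a subset of the accuracy measures a server like CompaRNA computes.

```python
def pairs(dot_bracket):
    """Extract the set of base pairs (i, j) from dot-bracket notation."""
    stack, bp = [], set()
    for i, c in enumerate(dot_bracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            bp.add((stack.pop(), i))
    return bp

def score(reference, predicted):
    """Sensitivity: fraction of true pairs recovered.
    PPV: fraction of predicted pairs that are correct."""
    ref, pred = pairs(reference), pairs(predicted)
    tp = len(ref & pred)
    sens = tp / len(ref) if ref else 0.0
    ppv = tp / len(pred) if pred else 0.0
    return sens, ppv

ref = "((((....))))"   # hypothetical reference 2D structure
pred = "(((......)))"  # hypothetical prediction missing one pair
sens, ppv = score(ref, pred)
```

Here the prediction recovers three of the four reference pairs and predicts no false pairs, so sensitivity is 0.75 and PPV is 1.0.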
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms, morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) with multiple levels of detail (Mixed and ReverseLoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid
2016-01-01
Purpose: Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. Methods: A template database of 195 (81 males, 114 females; age range 32–67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Results: Among the four automated methods, ABSS generated the most accurate results. 
For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, −4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland–Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. Conclusions: The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study. PMID:26745947
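Two of the voxel-overlap metrics reported above, the Dice coefficient and precision, are easy to state concretely. The tiny binary masks below are illustrative stand-ins for segmented MR volumes, not real data.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def precision(auto, manual):
    """Fraction of automatically labeled voxels that the manual
    (benchmark) segmentation also labels."""
    auto, manual = auto.astype(bool), manual.astype(bool)
    return np.logical_and(auto, manual).sum() / auto.sum()

# Synthetic 2D "slices": a 4x4 manual mask and an automated mask
# shifted down by one row, so 12 of 16 voxels overlap.
manual = np.zeros((8, 8), dtype=bool); manual[2:6, 2:6] = True
auto = np.zeros((8, 8), dtype=bool); auto[3:7, 2:6] = True
d = dice(auto, manual)
p = precision(auto, manual)
```

With 12 overlapping voxels out of 16 in each mask, both metrics come out to 0.75; the study's percentage comparisons are relative differences of such values between methods.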
Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul
2016-09-29
The generation of Everglades Depth Estimation Network (EDEN) daily water-level and water-depth maps is dependent on high quality real-time data from over 240 water-level stations. To increase the accuracy of the daily water-surface maps, the Automated Data Assurance and Management (ADAM) tool was created by the U.S. Geological Survey as part of Greater Everglades Priority Ecosystems Science. The ADAM tool is used to provide accurate quality-assurance review of the real-time data from the EDEN network and allows estimation or replacement of missing or erroneous data. This user’s manual describes how to install and operate the ADAM software. File structure and operation of the ADAM software is explained using examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.
NASA Astrophysics Data System (ADS)
Cao, Haotian; Song, Xiaolin; Zhao, Song; Bao, Shan; Huang, Zhi
2017-08-01
Automated driving has received broad attention from academia and industry, since it can greatly reduce the severity of potential traffic accidents and advance automobile safety and comfort. This paper presents an optimal model-based trajectory following architecture for a highly automated vehicle in driving tasks such as automated guidance or lane keeping, comprising a velocity-planning module, a steering controller and a velocity-tracking controller. The velocity-planning module, which considers time efficiency and passenger comfort simultaneously, generates a smooth velocity profile. The robust sliding mode control (SMC) steering controller with an adaptive preview time strategy not only tracks the target path well but also, thanks to its fuzzy-adaptive preview time mechanism, avoids large lateral accelerations during path tracking. In addition, an SMC controller with an input-output linearisation method for velocity tracking is built and validated. Simulation results, compared with Driver-in-the-Loop simulations performed by an experienced driver and a novice driver, show that this trajectory following architecture is effective and feasible for highly automated driving: it plans a satisfying longitudinal speed profile, tracks the target path well and safely over different road geometries, and ensures good time efficiency and driving comfort simultaneously.
Automated MAD and MIR structure solution
Terwilliger, Thomas C.; Berendzen, Joel
1999-01-01
Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations. PMID:10089316
An Intelligent Automation Platform for Rapid Bioprocess Design
Wu, Tianyi
2014-01-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user’s inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. PMID:24088579
Automated Tetrahedral Mesh Generation for CFD Analysis of Aircraft in Conceptual Design
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Li, Wu; Campbell, Richard L.
2014-01-01
The paper introduces an automation process of generating a tetrahedral mesh for computational fluid dynamics (CFD) analysis of aircraft configurations in early conceptual design. The method was developed for CFD-based sonic boom analysis of supersonic configurations, but can be applied to aerodynamic analysis of aircraft configurations in any flight regime.
USDA-ARS?s Scientific Manuscript database
Using next-generation-sequencing technology to assess entire transcriptomes requires high quality starting RNA. Currently, RNA quality is routinely judged using automated microfluidic gel electrophoresis platforms and associated algorithms. Here we report that such automated methods generate false-n...
NASA Astrophysics Data System (ADS)
Moreno, R.; Bazán, A. M.
2017-10-01
The main purpose of this work is to study improvements to the learning method of technical drawing and descriptive geometry through exercises with traditional techniques that are usually solved manually, by applying automated processes assisted by high-level CAD templates (HLCts). Given that an exercise can be solved step by step with traditional procedures, as detailed in technical drawing and descriptive geometry manuals, CAD applications allow us to do the same and later generalize it by incorporating references. Some traditional content has become obsolete and been relegated in current curricula; however, it can be applied in certain automation processes. The use of geometric references (using variables in script languages) and their incorporation into HLCts allows the automation of drawing processes. Instead of repeatedly creating similar exercises or modifying data in the same exercises, users should be able to use HLCts to generate future modifications of these exercises. This paper introduces the automation process when generating exercises based on CAD script files, aided by parametric geometry calculation tools. The proposed method allows us to design new exercises without user intervention. The integration of CAD, mathematics, and descriptive geometry facilitates their joint learning. Automation in the generation of exercises not only saves time but also increases the quality of the statements and reduces the possibility of human error.
JWST Associations overview: automated generation of combined products
NASA Astrophysics Data System (ADS)
Alexov, Anastasia; Swade, Daryl; Bushouse, Howard; Diaz, Rosa; Eisenhamer, Jonathan; Hack, Warren; Kyprianou, Mark; Levay, Karen; Rahmani, Christopher; Swam, Mike; Valenti, Jeff
2018-01-01
We present the design of the James Webb Space Telescope (JWST) Data Management System (DMS) automated processing of Associations. An Association captures the relationship between exposures and higher-level data products, such as combined mosaics created from dithered and tiled observations. The astronomer's intent is captured within the Proposal Planning System (PPS) and provided to DMS as candidate associations. These candidates are converted into Association Pools and Association Generator Tables that serve as input to the automated processing which creates the combined data products. Association Pools are generated to capture a list of exposures that could potentially form associations and provide relevant information about those exposures. The Association Generator, using grouping definitions, creates one or more Association Tables from a single input Association Pool. Each Association Table defines a set of exposures to be combined and the ruleset of the combination to be performed; the calibration software creates Associated data products based on these input tables. The initial design produces automated Associations within a proposal. This overall JWST design is also conducive to eventually producing Associations for observations from multiple proposals, similar to the Hubble Legacy Archive (HLA).
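The pool-to-table step can be pictured as grouping a flat exposure pool by association candidate and stamping each group with the rule that governs its combination. The record fields, identifiers, and rule name in this sketch are hypothetical, not the actual JWST schema.

```python
from collections import defaultdict

def generate_associations(pool, rule="dither_mosaic"):
    """Group exposure records by association candidate; emit one
    association table (a dict) per candidate, listing its members."""
    groups = defaultdict(list)
    for exposure in pool:
        groups[exposure["asn_candidate"]].append(exposure["exposure_id"])
    return [
        {"asn_id": cand, "rule": rule, "members": sorted(members)}
        for cand, members in sorted(groups.items())
    ]

# Hypothetical pool: two exposures in candidate c1000, one in c1001.
pool = [
    {"exposure_id": "e001", "asn_candidate": "c1000"},
    {"exposure_id": "e002", "asn_candidate": "c1000"},
    {"exposure_id": "e003", "asn_candidate": "c1001"},
]
tables = generate_associations(pool)
```

In the real pipeline, the generator's rulesets are far richer (instrument modes, dither patterns, tile layouts), but each resulting table plays the same role: a membership list plus a combination recipe handed to the calibration software.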
Sequence-of-events-driven automation of the deep space network
NASA Technical Reports Server (NTRS)
Hill, R., Jr.; Fayyad, K.; Smyth, C.; Santos, T.; Chen, R.; Chien, S.; Bevan, R.
1996-01-01
In February 1995, sequence-of-events (SOE)-driven automation technology was demonstrated for a Voyager telemetry downlink track at DSS 13. This demonstration entailed automated generation of an operations procedure (in the form of a temporal dependency network) from project SOE information using artificial intelligence planning technology and automated execution of the temporal dependency network using the link monitor and control operator assistant system. This article describes the overall approach to SOE-driven automation that was demonstrated, identifies gaps in SOE definitions and project profiles that hamper automation, and provides detailed measurements of the knowledge engineering effort required for automation.
Ontology-Based Multiple Choice Question Generation
Al-Yahya, Maha
2014-01-01
With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
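The core move in ontology-based MCQ generation can be sketched from bare triples: the asserted object of a relation becomes the key, and other objects of the same relation become distractors. The mini-ontology, relation name, and stem template below are invented for illustration; OntoQue itself operates over full OWL ontologies with reasoning support.

```python
import random

# A hypothetical mini-ontology of part-of assertions.
triples = [
    ("Mitochondrion", "isPartOf", "Cell"),
    ("Thylakoid", "isPartOf", "Chloroplast"),
    ("Cristae", "isPartOf", "Mitochondrion"),
    ("Nucleolus", "isPartOf", "Nucleus"),
]

def make_mcq(subject, rng):
    """Build one MCQ item: the asserted object is the key; the other
    objects of the same relation serve as distractors."""
    key = next(o for s, _, o in triples if s == subject)
    distractors = sorted({o for _, _, o in triples if o != key})
    options = distractors + [key]
    rng.shuffle(options)
    return {
        "stem": f"{subject} isPartOf which of the following?",
        "options": options,
        "answer": key,
    }

q = make_mcq("Thylakoid", random.Random(0))
```

The evaluation shortcomings the paper reports show up even in this toy: nothing here checks that the distractors are pedagogically plausible or that the stem reads as natural language.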
Quasi-three-dimensional particle imaging with digital holography.
Kemppinen, Osku; Heinson, Yuli; Berg, Matthew
2017-05-01
In this work, approximate three-dimensional structures of microparticles are generated with digital holography using an automated focus method. This is done by stacking a collection of silhouette-like images of a particle reconstructed from a single in-line hologram. The method enables estimation of the particle size in the longitudinal and transverse dimensions. Using the discrete dipole approximation, the method is tested computationally by simulating holograms for a variety of particles and attempting to reconstruct the known three-dimensional structure. It is found that poor longitudinal resolution strongly perturbs the reconstructed structure, yet the method does provide an approximate sense for the structure's longitudinal dimension. The method is then applied to laboratory measurements of holograms of single microparticles and their scattering patterns.
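The stacking idea can be sketched with a synthetic volume: threshold each reconstructed focus plane into a silhouette and read off the bounding extents of the dark voxels. The array below stands in for actual holographic reconstructions; in practice the poor longitudinal resolution noted above smears the particle across many more planes than its true depth.

```python
import numpy as np

def particle_extent(stack, threshold):
    """stack: (depth, y, x) reconstructed-intensity volume; return the
    (dz, dy, dx) bounding extents of the dark (silhouette) voxels."""
    zs, ys, xs = np.nonzero(stack < threshold)
    return (int(zs.max() - zs.min() + 1),
            int(ys.max() - ys.min() + 1),
            int(xs.max() - xs.min() + 1))

# Synthetic stack standing in for reconstructions at 10 focus depths.
stack = np.ones((10, 16, 16))
stack[3:7, 5:9, 6:12] = 0.1   # a dark particle spanning 4 planes
dz, dy, dx = particle_extent(stack, threshold=0.5)
```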
A Case Study of Reverse Engineering Integrated in an Automated Design Process
NASA Astrophysics Data System (ADS)
Pescaru, R.; Kyratsis, P.; Oancea, G.
2016-11-01
This paper presents a design methodology which automates the generation of curves extracted from point clouds obtained by digitizing physical objects. The methodology is described on a product belonging to the consumables industry, namely a footwear product that has a complex shape with many curves. The final result is the automated generation of wrapping curves, surfaces and solids according to the characteristics of the customer's foot and the preferences for the chosen model, which leads to the development of customized products.
Automated Fabrication Technologies for High Performance Polymer Composites
NASA Technical Reports Server (NTRS)
Shuart , M. J.; Johnston, N. J.; Dexter, H. B.; Marchello, J. M.; Grenoble, R. W.
1998-01-01
New fabrication technologies are being exploited for building high-performance graphite-fiber-reinforced composite structures. Stitched fiber preforms and resin film infusion have been successfully demonstrated for large composite wing structures. Other automated processes being developed include automated placement of tacky, drapable epoxy towpreg, automated heated-head placement of consolidated ribbon/tape, and vacuum-assisted resin transfer molding. These methods have the potential to yield low-cost, high-performance structures by fabricating composite structures to net shape out-of-autoclave.
NASA Astrophysics Data System (ADS)
Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.
2006-03-01
Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
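The clinical quantity these cup and disc segmentations feed is the cup-to-disc ratio. One simple definition, used here purely for illustration, is the equivalent-diameter ratio derived from the segmented areas; the synthetic masks below stand in for the contours recovered from stereo disparity.

```python
import numpy as np

def cup_to_disc_ratio(cup, disc):
    """Equivalent-diameter cup-to-disc ratio from binary masks:
    sqrt(area_cup / area_disc)."""
    return float(np.sqrt(cup.sum() / disc.sum()))

# Synthetic masks: a 10x10 disc region with a 5x5 cup inside it.
disc = np.zeros((20, 20), dtype=bool); disc[5:15, 5:15] = True   # 100 px
cup = np.zeros((20, 20), dtype=bool); cup[7:12, 7:12] = True     # 25 px
r = cup_to_disc_ratio(cup, disc)
```

Clinicians also track the vertical cup-to-disc ratio; either variant becomes objective once the contours come from an automated, operator-independent segmentation.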
A compendium of controlled diffusion blades generated by an automated inverse design procedure
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1989-01-01
A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.
Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
2015-08-21
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
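The generation step described above can be sketched in miniature: a genetic design, given as a list of SBOL-style parts, is mapped to the species and production/degradation reactions an SBML model would contain. This is a plain-Python illustration of the concept, not the iBioSim API; all part names and dictionary fields are invented.

```python
# Minimal sketch of model generation: map a genetic design (SBOL-style
# part list) to biochemical species and reactions (SBML-style). All
# identifiers here are hypothetical, not the iBioSim interface.

def generate_model(design):
    """Turn promoter->CDS pairs into production/degradation reactions."""
    species = []
    reactions = []
    for part in design:
        if part["type"] == "CDS":
            protein = part["id"] + "_protein"
            species.append(protein)
            # production reaction, driven by the upstream promoter
            reactions.append({"id": "prod_" + part["id"],
                              "reactants": [], "products": [protein],
                              "modifiers": [part["promoter"]]})
            # first-order degradation of the protein
            reactions.append({"id": "deg_" + part["id"],
                              "reactants": [protein], "products": []})
    return {"species": species, "reactions": reactions}

design = [{"type": "promoter", "id": "pTet"},
          {"type": "CDS", "id": "gfp", "promoter": "pTet"}]
model = generate_model(design)
```

A real generator would additionally attach kinetic laws and SBOL annotations to each element so the model stays traceable to the design.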
Haas, Brian J; Salzberg, Steven L; Zhu, Wei; Pertea, Mihaela; Allen, Jonathan E; Orvis, Joshua; White, Owen; Buell, C Robin; Wortman, Jennifer R
2008-01-01
EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation. PMID:18190707
Automation of Educational Tasks for Academic Radiology.
Lamar, David L; Richardson, Michael L; Carlson, Blake
2016-07-01
The process of education involves a variety of repetitious tasks. We believe that appropriate computer tools can automate many of these chores, and allow both educators and their students to devote a lot more of their time to actual teaching and learning. This paper details tools that we have used to automate a broad range of academic radiology-specific tasks on Mac OS X, iOS, and Windows platforms. Some of the tools we describe here require little expertise or time to use; others require some basic knowledge of computer programming. We used TextExpander (Mac, iOS) and AutoHotKey (Win) for automated generation of text files, such as resident performance reviews and radiology interpretations. Custom statistical calculations were performed using TextExpander and the Python programming language. A workflow for automated note-taking was developed using Evernote (Mac, iOS, Win) and Hazel (Mac). Automated resident procedure logging was accomplished using Editorial (iOS) and Python. We created three variants of a teaching session logger using Drafts (iOS) and Pythonista (iOS). Editorial and Drafts were used to create flashcards for knowledge review. We developed a mobile reference management system for iOS using Editorial. We used the Workflow app (iOS) to automatically generate a text message reminder for daily conferences. Finally, we developed two separate automated workflows, one with Evernote (Mac, iOS, Win) and one with Python (Mac, Win), that generate simple automated teaching file collections. We have beta-tested these workflows, techniques, and scripts on several of our fellow radiologists. All of them expressed enthusiasm for these tools and were able to use one or more of them to automate their own educational activities. Appropriate computer tools can automate many educational tasks, and thereby allow both educators and their students to devote a lot more of their time to actual teaching and learning.
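The text-expansion workflow described above can be illustrated in a few lines of Python: short abbreviations expand into report templates, optionally with fill-in fields. The snippets and field names below are invented examples, not the authors' actual TextExpander or AutoHotKey snippets.

```python
# Toy snippet expander in the spirit of TextExpander/AutoHotKey:
# abbreviations map to templates; templates may contain fill-in fields.
# All snippets and field names are illustrative only.

SNIPPETS = {
    ";nml": "No acute cardiopulmonary abnormality.",
    ";rev": "Resident {name} met expectations for {rotation}.",
}

def expand(text, **fields):
    """Replace every known abbreviation in text with its expansion."""
    for abbrev, template in SNIPPETS.items():
        expansion = template.format(**fields) if fields else template
        text = text.replace(abbrev, expansion)
    return text
```

In practice the payoff comes from snippets that call out to scripts (e.g. Python for statistics), which is how the paper combines the two tools.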
Next Generation Loading System for Detonators and Primers
Designed, fabricated and installed next generation tooling to provide additional manufacturing capabilities for new detonators and other small...prototype munitions on automated, semi-automated and manual machines. Led design effort, procured and installed a primary explosive Drying Oven for a pilot...facility. Designed, fabricated and installed a Primary Explosives Waste Treatment System in a pilot environmental processing facility. Designed
NASA Technical Reports Server (NTRS)
Hennrich, C. W.; Konrath, E. J., Jr.
1973-01-01
A basic automated substructure analysis capability for NASTRAN is presented which eliminates most of the logistical data handling and generation chores that are currently associated with the method. Rigid formats are proposed which will accomplish this using three new modules, all of which can be added to level 16 with a relatively small effort.
ClusPro: an automated docking and discrimination method for the prediction of protein complexes.
Comeau, Stephen R; Gatchell, David W; Vajda, Sandor; Camacho, Carlos J
2004-01-01
Predicting protein interactions is one of the most challenging problems in functional genomics. Given two proteins known to interact, current docking methods evaluate billions of docked conformations by simple scoring functions, and in addition to near-native structures yield many false positives, i.e. structures with good surface complementarity but far from the native. We have developed a fast algorithm for filtering docked conformations with good surface complementarity, and ranking them based on their clustering properties. The free energy filters select complexes with the lowest desolvation and electrostatic energies. Clustering is then used to smooth the local minima and to select the ones with the broadest energy wells, a property associated with the free energy at the binding site. The robustness of the method was tested on sets of 2000 docked conformations generated for 48 pairs of interacting proteins. In 31 of these cases, the top 10 predictions include at least one near-native complex, with an average RMSD of 5 Å from the native structure. The docking and discrimination method also provides good results for a number of complexes that were used as targets in the Critical Assessment of PRedictions of Interactions experiment. The fully automated docking and discrimination server ClusPro can be found at http://structure.bu.edu
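The ranking step lends itself to a compact sketch: repeatedly take the conformation with the most neighbors within an RMSD cutoff as a cluster center, then remove that cluster and repeat. The code below uses a toy one-dimensional stand-in for pairwise RMSD; it illustrates the greedy neighbor-count idea, not the ClusPro implementation.

```python
# Greedy neighbor-count clustering of docked conformations. Each
# conformation is represented here by a single number, and |a - b|
# stands in for structural RMSD; real inputs would be coordinate sets.

def cluster(conformations, cutoff=2.0):
    remaining = set(range(len(conformations)))
    clusters = []
    while remaining:
        # neighbor sets within the cutoff (a structure neighbors itself)
        neighbors = {i: {j for j in remaining
                         if abs(conformations[i] - conformations[j]) <= cutoff}
                     for i in remaining}
        center = max(remaining, key=lambda i: len(neighbors[i]))
        clusters.append(sorted(neighbors[center]))
        remaining -= neighbors[center]
    return clusters

# Three tight low-energy structures, a pair, and an outlier
groups = cluster([0.0, 0.5, 1.0, 5.0, 5.2, 9.0])
```

The broadest cluster emerges first, mirroring the "broadest energy well" criterion used to rank predictions.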
What's New in the Library Automation Arena?
ERIC Educational Resources Information Center
Breeding, Marshall
1998-01-01
Reviews trends in library automation based on vendors at the 1998 American Library Association Annual Conference. Discusses the major industry trend, a move from host-based computer systems to the new generation of client/server, object-oriented, open systems-based automation. Includes a summary of developments for 26 vendors. (LRW)
Automating Document Delivery: A Conference Report.
ERIC Educational Resources Information Center
Ensor, Pat
1992-01-01
Describes presentations made at a forum on automation, interlibrary loan (ILL), and document delivery sponsored by the Houston Area Library Consortium. Highlights include access versus ownership; software for ILL; fee-based services; automated management systems for ILL; and electronic mail and online systems for end-user-generated ILL requests.…
ERIC Educational Resources Information Center
Harik, Polina; Baldwin, Peter; Clauser, Brian
2013-01-01
Growing reliance on complex constructed response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that "compare" automated scoring strategies. Here, comparisons are made among five strategies for…
NASA Technical Reports Server (NTRS)
Maille, Nicolas P.; Statler, Irving C.; Ferryman, Thomas A.; Rosenthal, Loren; Shafto, Michael G.
2006-01-01
The objective of the Aviation System Monitoring and Modeling (ASMM) project of NASA's Aviation Safety and Security Program was to develop technologies that will enable proactive management of safety risk, which entails identifying the precursor events and conditions that foreshadow most accidents. This presents a particular challenge in the aviation system, where people are key components and human error is frequently cited as a major contributing factor or cause of incidents and accidents. In the aviation "world", information about what happened can be extracted from quantitative data sources, but the experiential account of the incident reporter is the best available source of information about why an incident happened. This report describes a conceptual model and an approach to automated analyses of textual data sources for the subjective perspective of the reporter of the incident to aid in understanding why an incident occurred. It explores a first-generation process for routinely searching large databases of textual reports of aviation incidents or accidents, and reliably analyzing them for causal factors of human behavior (the why of an incident). We have defined a generic structure of information that is postulated to be a sound basis for defining similarities between aviation incidents. Based on this structure, we have introduced a simplifying structure, which we call the Scenario, as a pragmatic guide for identifying similarities of what happened based on the objective parameters that define the Context and the Outcome of a Scenario. We believe that it will be possible to design an automated analysis process, guided by the structure of the Scenario, that will aid aviation-safety experts in understanding the systemic issues that are conducive to human error.
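The Scenario idea can be made concrete with a toy similarity measure: reduce each incident to objective Context and Outcome fields and score the fraction of fields two incidents share. The field names and values below are hypothetical, not drawn from the ASMM data.

```python
# Toy Scenario comparison: incidents reduced to objective fields
# (Context + Outcome); similarity is the fraction of matching fields.
# Field names and values are invented for illustration.

def scenario_similarity(a, b):
    keys = set(a) | set(b)
    shared = sum(1 for k in keys if a.get(k) == b.get(k))
    return shared / len(keys)

s1 = {"phase": "approach", "weather": "IMC", "outcome": "altitude_deviation"}
s2 = {"phase": "approach", "weather": "VMC", "outcome": "altitude_deviation"}
score = scenario_similarity(s1, s2)
```

Grouping reports by such objective similarity is the precondition for then mining the free-text narratives for the "why" factors.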
A modular assembling platform for manufacturing of microsystems by optical tweezers
NASA Astrophysics Data System (ADS)
Ksouri, Sarah Isabelle; Aumann, Andreas; Ghadiri, Reza; Prüfer, Michael; Baer, Sebastian; Ostendorf, Andreas
2013-09-01
Due to the increased complexity, in terms of materials and geometries, of microsystems, new assembly techniques are required. Assembly techniques from the semiconductor industry are often very specific and cannot fulfill all specifications of more complex microsystems. Therefore, holographic optical tweezers are applied to manipulate structures in the micrometer range with the highest flexibility and precision. As is well known, non-spherical assemblies can be trapped and controlled by laser light and assembled with an additional light-modulator application, where the incident laser beam is rearranged into flexible light patterns in order to generate multiple spots. The complementary building blocks are generated by a two-photon polymerization (2PP) process. The possibilities of manufacturing arbitrary microstructures and the potential of optical tweezers lead to the idea of combining manufacturing techniques with manipulation processes into "microrobotic" processes. This work presents the manipulation of generated complex microstructures with optical tools as well as a storage solution for 2PP assemblies. A sample holder has been developed for the manual feeding of 2PP building blocks. Furthermore, a modular assembling platform has been constructed for an 'all-in-one' 2PP manufacturing process with a dedicated storage system. The long-term objective is the automation of feeding and storage of several different 2PP micro-assemblies to realize an automated assembly process.
Automated ILA design for synchronous sequential circuits
NASA Technical Reports Server (NTRS)
Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.
1991-01-01
An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. This technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1-micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.
Towards Evolving Electronic Circuits for Autonomous Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris
2000-01-01
The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.
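A minimal genetic-algorithm skeleton of the kind used in such evolutionary circuit design might look as follows: tournament selection, one-point crossover, and Gaussian mutation over a vector of component values. The fitness function here is a toy stand-in (match a target vector), not a circuit simulation, and all parameters are invented.

```python
import random

# Toy GA in the style of evolutionary circuit design: a genome is a
# vector of device values; fitness is a stand-in objective (distance
# to a target vector) rather than a simulated circuit response.

TARGET = [3.3, 1.0, 4.7]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 5) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # two tournaments of three pick the parents
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(len(TARGET))      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # mutate one gene
                child[rng.randrange(len(TARGET))] += rng.gauss(0, 0.2)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In the paper's setting the genome additionally encodes circuit topology and size, and fitness comes from simulating the evolved circuit.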
NASA Astrophysics Data System (ADS)
Vitásek, Stanislav; Matějka, Petr
2017-09-01
The article deals with problematic parts of automated processing of quantity takeoff (QTO) from data generated in a BIM model. It focuses on models of road constructions and uses volumes and dimensions of excavation work to create an estimate of construction costs. The article uses a case study and explorative methods to discuss possibilities and problems of data transfer from a model to a price system of construction production when such transfer is used for price estimates of construction works. Current QTOs and price tenders are made with 2D documents. This process is becoming obsolete because more modern tools can be used. The BIM phenomenon enables partial automation in processing volumes and dimensions of construction units and matching the data to units in a given price scheme. Therefore, the price of construction can be estimated and structured without lengthy and often imprecise manual calculations. The use of BIM for QTO is highly dependent on local market budgeting systems, so a proper push/pull strategy is required. It also requires a proper requirements specification, a compatible pricing database, and suitable software.
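The matching step the article describes, pairing model-derived quantities with units of a price system, can be sketched as a lookup with a fallback for unmatched items. The item codes and unit prices below are invented for illustration.

```python
# Sketch of BIM-based QTO pricing: quantities taken off the model are
# matched against a unit-price scheme; items missing from the scheme
# are flagged for manual pricing. Codes and prices are invented.

PRICE_LIST = {
    "excavation_m3": 12.50,   # unit price per m3
    "embankment_m3": 8.20,
}

def estimate(quantities):
    total = 0.0
    missing = []
    for code, qty in quantities.items():
        if code in PRICE_LIST:
            total += qty * PRICE_LIST[code]
        else:
            missing.append(code)  # item the price system lacks
    return total, missing

qto = {"excavation_m3": 1200.0, "embankment_m3": 300.0, "geotextile_m2": 50.0}
cost, unmatched = estimate(qto)
```

The `missing` list is the interesting output in practice: it measures how well the model's classification aligns with the local budgeting system.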
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
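A worked example of the kind of Markov model ARM is meant to specify: a duplex system with one active and one cold-standby unit and no repair, stepped forward in discrete time. The failure rate and time step are invented for illustration; ARM itself emits a model specification for external solvers rather than solving it this way.

```python
# Discrete-time evaluation of a tiny Markov reliability model:
# states are (2 good, 1 good, failed); only the active unit can fail,
# at rate lam per hour (cold standby, no repair). Rates are invented.

def reliability(lam=1e-4, dt=1.0, hours=10000):
    p = [1.0, 0.0, 0.0]                 # start with both units good
    for _ in range(int(hours / dt)):
        p2, p1, pf = p
        p = [p2 * (1 - lam * dt),                     # stay in "2 good"
             p2 * lam * dt + p1 * (1 - lam * dt),     # enter/stay "1 good"
             pf + p1 * lam * dt]                      # absorb into "failed"
    return p[0] + p[1]                  # probability not yet failed
```

For this model the analytic answer is R(t) = e^(-λt)(1 + λt), so at λt = 1 the survival probability is about 0.736, which the discrete stepping approximates closely.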
NASA Technical Reports Server (NTRS)
Ciciora, J. A.; Leonard, S. D.; Johnson, N.; Amell, J.
1984-01-01
In order to derive general design guidelines for automated systems, a study was conducted on the utilization and acceptance of existing automated systems as currently employed in several commercial fields. Four principal study areas were investigated by means of structured interviews and, in some cases, questionnaires. The study areas were aviation, both scheduled airline and general commercial aviation; process control and factory applications; office automation; and automation in the power industry. The results of over eighty structured interviews were analyzed and responses categorized as various human factors issues for use by both designers and users of automated equipment. These guidelines address such items as general physical features of automated equipment; personnel orientation, acceptance, and training; and both personnel and system reliability.
Semi-Automated Trajectory Analysis of Deep Ballistic Penetrating Brain Injury
Folio, Les; Solomon, Jeffrey; Biassou, Nadia; Fischer, Tatjana; Dworzak, Jenny; Raymont, Vanessa; Sinaii, Ninet; Wassermann, Eric M.; Grafman, Jordan
2016-01-01
Background: Penetrating head injuries (PHIs) are common in combat operations and most have visible wound paths on computed tomography (CT). Objective: We assess agreement between an automated trajectory analysis-based assessment of brain injury and manual tracings of encephalomalacia on CT. Methods: We analyzed 80 head CTs with ballistic PHI from the Institutional Review Board-approved Vietnam head injury registry. Anatomic reports were generated from spatial coordinates of projectile entrance and terminal fragment location. These were compared to manual tracings of the regions of encephalomalacia. Dice's similarity coefficients, kappa, sensitivities, and specificities were calculated to assess agreement. Times required for case analysis were also compared. Results: Results show high specificity of anatomic regions identified on CT with semiautomated anatomical estimates and manual tracings of tissue damage. Radiologists' and medical students' anatomic region reports were similar (kappa 0.8, t-test p < 0.001). Region-of-probable-injury modeling of involved brain structures was sensitive (0.7) and specific (0.9) compared with manually traced structures. Semiautomated analysis was 9-fold faster than manual tracing. Conclusion: Our region-of-probable-injury spatial model approximates anatomical regions of encephalomalacia from ballistic PHI with time savings over manual methods. Results show potential for automated anatomical reporting as an adjunct to the current practice of radiologist/neurosurgical review of brain injury by penetrating projectiles. PMID:23707123
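Dice's similarity coefficient, used above to assess agreement between the automated injury region and the manual tracing, is straightforward to compute on voxel (or pixel) sets. The toy masks below are invented.

```python
# Dice similarity: 2*|A & B| / (|A| + |B|), here on toy voxel sets
# standing in for a manual tracing and an automated region estimate.

def dice(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0      # two empty regions agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

manual = {(1, 1), (1, 2), (2, 1), (2, 2)}   # manually traced voxels
auto = {(1, 2), (2, 1), (2, 2), (3, 2)}     # automated estimate
overlap = dice(manual, auto)
```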
Social media based NLP system to find and retrieve ARM data: Concept paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devarakonda, Ranjeet; Giansiracusa, Michael T.; Kumar, Jitendra
Information connectivity and retrieval have a role in our daily lives. The most pervasive source of online information is databases. The amount of data is growing at a rapid rate, and database technology is improving and having a profound effect. Almost all online applications store and retrieve information from databases. One challenge in supplying the public with wider access to informational databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it may not be practical to make the public aware of the structure of the database. There is a need for novice users to query relational databases using their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Social media makes it possible to interact with a wide section of the population. Through this medium, and with the help of Natural Language Processing (NLP), we can make the data of the Atmospheric Radiation Measurement Data Center (ADC) more accessible to the public. We propose an architecture for using Apache Lucene/Solr [1], OpenML [2,3], and Kafka [4] to generate an automated query/response system with inputs from Twitter, our Cassandra DB, and our log database. Using the Twitter API and NLP, we can give the public the ability to ask questions of our database and get automated responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, G.C.; French, M.A.; Monteverde, D.H.
1993-03-01
An automated method has been developed for representing outcrop data on geologic structures on maps. Using an MS-DOS custom database management system in conjunction with the ARC/INFO Geographic Information System (GIS), trends of geologic structures are plotted with user-specified symbols. The length of structural symbols can be frequency-weighted based on collective values from structural domains. The PC-based data manager is the NJGS Field Data Management System (FMS) Version 2.0, which includes sort, output, and analysis functions for structural data input in either azimuth or quadrant form. Program options include lineament sorting, data output to other data management and analysis software, and a circular histogram (rose diagram) routine for trend frequency analysis. Trends can be displayed with either half- or full-rose diagrams using either 10° sectors or one-degree spikes for strike, trend, or dip azimuth readings. Scalar and vector statistics are both included. For mesostructural analysis, ASCII files containing the station number, structural trend and inclination, and plot-symbol-length value are downloaded from FMS and uploaded into an ARC/INFO macro which sequentially plots the information. Plots can be generated in conjunction with any complementary GIS coverage for various types of spatial analyses. Mesostructural plots can be used for regional tectonic analyses, for hydrogeologic analysis of fractured bedrock aquifers, or for ground-truthing data from fracture-trace or lineament analyses.
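The core binning step of such a rose-diagram routine can be sketched as follows: strike azimuths are counted into 10° sectors, and because strikes are axial data each reading is counted in both its own sector and the opposite one. The readings below are invented.

```python
# Rose-diagram binning for axial (strike) data: each azimuth in degrees
# contributes to its 10-degree sector and to the sector 180 degrees
# opposite. Sample readings are invented.

def rose_bins(azimuths, sector=10):
    counts = [0] * (360 // sector)
    for az in azimuths:
        for a in (az % 360, (az + 180) % 360):   # axial: count both ends
            counts[int(a // sector)] += 1
    return counts

strikes = [5, 8, 12, 185, 95]
bins = rose_bins(strikes)
```

Plotting `bins` on polar axes, with bar length proportional to count, gives the full-rose diagram; a half-rose keeps only sectors 0 to 180°.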
Achieving realistic performance and decision-making capabilities in computer-generated air forces
NASA Astrophysics Data System (ADS)
Banks, Sheila B.; Stytz, Martin R.; Santos, Eugene, Jr.; Zurita, Vincent B.; Benslay, James L., Jr.
1997-07-01
For a computer-generated force (CGF) system to be useful in training environments, it must be able to operate at multiple skill levels, exhibit competency at assigned missions, and comply with current doctrine. Because of the rapid rate of change in distributed interactive simulation (DIS) and the expanding set of performance objectives for any computer-generated force, the system must also be modifiable at reasonable cost and incorporate mechanisms for learning. Therefore, CGF applications must have adaptable decision mechanisms and behaviors and perform automated incorporation of past reasoning and experience into their decision processes. The CGF must also possess multiple skill levels for classes of entities, gracefully degrade its reasoning capability in response to system stress, possess an expandable modular knowledge structure, and perform adaptive mission planning. Furthermore, correctly performing individual entity behaviors is not sufficient. Issues related to complex inter-entity behavioral interactions, such as the need to maintain formation and share information, must also be considered. The CGF must also be able to acceptably respond to unforeseen circumstances and be able to make decisions in spite of uncertain information. Because of the need for increased complexity in the virtual battlespace, the CGF should exhibit complex, realistic behavior patterns within the battlespace. To achieve these necessary capabilities, an extensible software architecture, an expandable knowledge base, and an adaptable decision making mechanism are required. Our lab has addressed these issues in detail. The resulting DIS-compliant system is called the automated wingman (AW). The AW is based on fuzzy logic, the common object database (CODB) software architecture, and a hierarchical knowledge structure. We describe the techniques that enabled us to make progress toward a CGF entity that satisfies the requirements presented above.
We present our design and implementation of an adaptable decision making mechanism that uses multi-layered, fuzzy logic controlled situational analysis. Because our research indicates that fuzzy logic can perform poorly under certain circumstances, we combine fuzzy logic inferencing with adversarial game tree techniques for decision making in strategic and tactical engagements. We describe the approach we employed to achieve this fusion. We also describe the automated wingman's system architecture and knowledge base architecture.
Automated batch fiducial-less tilt-series alignment in Appion using Protomo
Noble, Alex J.; Stagg, Scott M.
2015-01-01
The field of electron tomography has benefited greatly from manual and semi-automated approaches to marker-based tilt-series alignment that have allowed for the structural determination of multitudes of in situ cellular structures as well as macromolecular structures of individual protein complexes. The emergence of complementary metal-oxide semiconductor detectors capable of detecting individual electrons has enabled the collection of low dose, high contrast images, opening the door for reliable correlation-based tilt-series alignment. Here we present a set of automated, correlation-based tilt-series alignment, contrast transfer function (CTF) correction, and reconstruction workflows for use in conjunction with the Appion/Leginon package that are primarily targeted at automating structure determination with cryogenic electron microscopy. PMID:26455557
Two-Graph Building Interior Representation for Emergency Response Applications
NASA Astrophysics Data System (ADS)
Boguslawski, P.; Mahdjoubi, L.; Zverovich, V.; Fadli, F.
2016-06-01
Nowadays, in a rapidly developing urban environment with bigger and higher public buildings, disasters causing emergency situations and casualties are unavoidable. Preparedness and quick response are crucial to saving human lives. Available information about an emergency scene, such as the building structure, helps with decision making and organizing rescue operations. Models supporting decision-making should be available in real, or near-real, time. Thus, good-quality models that allow implementation of automated methods are highly desirable. This paper presents details of a recently developed method for automated generation of variable-density navigable networks in a 3D indoor environment, including a full 3D topological model, which may be used not only for standard navigation but also for finding safe routes and simulating hazards and phenomena associated with disasters such as fire spread and heat transfer.
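Safe-route finding on such a navigable network reduces, in the simplest case, to graph search that skips hazard-flagged nodes. The room graph below is invented; the paper's model additionally carries full 3D topology and variable node density.

```python
from collections import deque

# Toy safe-route search on a room-adjacency graph: breadth-first
# search that refuses to enter nodes flagged as hazardous. The
# building graph is invented for illustration.

def safe_route(graph, start, goal, hazards=frozenset()):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen and nxt not in hazards:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no safe route exists

graph = {"lobby": ["hall", "stairs"], "hall": ["room1", "stairs"],
         "stairs": ["exit"], "room1": []}
route = safe_route(graph, "lobby", "exit")
```

Flagging "stairs" as hazardous (e.g. fire spread) makes the exit unreachable in this toy graph, which is exactly the kind of situation the simulation capability is meant to expose.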
SU-C-BRB-01: Automated Dose Deformation for Re-Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, S; Kainz, K; Li, X
Purpose: An objective of retreatment planning is to minimize dose to previously irradiated tissues. Conventional retreatment planning is based largely on best-guess superposition of the previous treatment’s isodose lines. In this study, we report a rigorous, automated retreatment planning process to minimize dose to previously irradiated organs at risk (OAR). Methods: Data for representative patients previously treated using helical tomotherapy and later retreated in the vicinity of the original disease site were retrospectively analyzed in an automated fashion using a prototype treatment planning system equipped with a retreatment planning module (Accuray, Inc.). The initial plan’s CT, structures, and planned dose were input along with the retreatment CT and structure set. Using a deformable registration algorithm implemented in the module, the initially planned dose and structures were warped onto the retreatment CT. An integrated third-party sourced software (MIM, Inc.) was used to evaluate registration quality and to contour overlapping regions between isodose lines and OARs, providing additional constraints during retreatment planning. The resulting plan and the conventionally generated retreatment plan were compared. Results: Jacobian maps showed good quality registration between the initial plan and retreatment CTs. For a right orbit case, the dose deformation facilitated delineating the regions of the eyes and optic chiasm originally receiving 13 to 42 Gy. Using these regions as dose constraints, the new retreatment plan resulted in V50 reduction of 28% for the right eye and 8% for the optic chiasm, relative to the conventional plan. Meanwhile, differences in the PTV dose coverage were clinically insignificant. Conclusion: Automated retreatment planning with dose deformation and definition of previously-irradiated regions allowed for additional planning constraints to be defined to minimize re-irradiation of OARs. For serial organs that do not recover from radiation damage, this method provides a more precise and quantitative means to limit cumulative dose. This research is partially supported by Accuray, Inc.
NASA Astrophysics Data System (ADS)
Görgl, Richard; Brandstätter, Elmar
2017-01-01
The article presents an overview of what is possible nowadays in the field of laser materials processing. The state of the art in the complete process chain is shown, starting with the generation of a specific component's CAD data and continuing with the automated motion-path generation for the laser head carried by a CNC or robot system. Application examples from laser cladding and laser-based additive manufacturing are given.
Lardy, Matthew A; Lebrun, Laurie; Bullard, Drew; Kissinger, Charles; Gobbi, Alberto
2012-05-25
In modern day drug discovery campaigns, computational chemists have to be concerned not only about improving the potency of molecules but also reducing any off-target ADMET activity. There are a plethora of antitargets that computational chemists may have to consider. Fortunately many antitargets have crystal structures deposited in the PDB. These structures are immediately useful to our Autocorrelator: an automated model generator that optimizes variables for building computational models. This paper describes the use of the Autocorrelator to construct high quality docking models for cytochrome P450 2C9 (CYP2C9) from two publicly available crystal structures. Both models result in strong correlation coefficients (R² > 0.66) between the predicted and experimentally determined log(IC₅₀) values. Results from the two models overlap well with each other, converging on the same scoring function, deprotonated charge state, and predicted binding orientation for our collection of molecules.
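The reported R² > 0.66 is simply the squared Pearson correlation between predicted and measured log(IC₅₀) values. A minimal sketch of that check, with hypothetical data values, is:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predicted vs. measured log(IC50) values for a small set.
predicted = [4.1, 5.0, 5.9, 6.8, 7.7]
measured = [4.0, 5.2, 5.7, 7.0, 7.6]
r_squared = pearson_r(predicted, measured) ** 2
```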
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chernoguzov, Alexander; Markham, Thomas R.; Haridas, Harshal S.
A method includes generating at least one access vector associated with a specified device in an industrial process control and automation system. The specified device has one of multiple device roles. The at least one access vector is generated based on one or more communication policies defining communications between one or more pairs of device roles in the industrial process control and automation system, where each pair of device roles includes the device role of the specified device. The method also includes providing the at least one access vector to at least one of the specified device and one or more other devices in the industrial process control and automation system in order to control communications to or from the specified device.
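The idea of deriving a device's access vector from role-pair communication policies can be sketched as follows. The policy table and role names are illustrative, not from the patent itself.

```python
# Hypothetical role-pair policies: (source role, destination role) -> allowed.
POLICIES = {
    ("sensor", "controller"): True,
    ("controller", "actuator"): True,
    ("controller", "historian"): True,
    ("sensor", "actuator"): False,
}

def access_vector(role):
    """Derive the set of peer roles a device of `role` may communicate with.

    Only policies whose role pair includes `role` contribute, mirroring the
    rule that each considered pair contains the specified device's role.
    """
    vector = set()
    for (src, dst), allowed in POLICIES.items():
        if allowed and role in (src, dst):
            vector.add(dst if src == role else src)
    return vector
```

The resulting vector would then be distributed to the device (and its peers) to enforce which communications are permitted.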
An industrial engineering approach to laboratory automation for high throughput screening
Menke, Karl C.
2000-01-01
Across the pharmaceutical industry, there are a variety of approaches to laboratory automation for high throughput screening. At Sphinx Pharmaceuticals, the principles of industrial engineering have been applied to systematically identify and develop those automated solutions that provide the greatest value to the scientists engaged in lead generation. PMID:18924701
The Automation Inventory of Research Libraries, 1986.
ERIC Educational Resources Information Center
Sitts, Maxine K., Ed.
Based on information and data from 113 Association of Research Libraries (ARL) members that were gathered and updated between March and August 1986, this publication was generated from a database developed by ARL to provide timely, comparable information about the extent and nature of automation within the ARL community. Trends in automation are…
Automated Test-Form Generation
ERIC Educational Resources Information Center
van der Linden, Wim J.; Diao, Qi
2011-01-01
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
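The selection model described above can be made concrete with a toy example. Real ATA formulates this as mixed-integer programming and hands it to a solver; the tiny item bank below is hypothetical, and exhaustive search stands in for the solver only because the instance is small.

```python
from itertools import combinations

# Toy item bank: (information at the target ability level, content area).
bank = [
    (0.9, "algebra"), (0.7, "algebra"), (0.8, "geometry"),
    (0.6, "geometry"), (0.5, "algebra"), (0.4, "geometry"),
]

def assemble(bank, length, per_area):
    """Solve a tiny 0/1 selection problem exhaustively: pick `length` items,
    exactly `per_area[a]` from each content area a, maximizing total
    information. Real ATA solves the same model with a MIP solver."""
    best, best_info = None, -1.0
    for picks in combinations(range(len(bank)), length):
        areas = [bank[i][1] for i in picks]
        if any(areas.count(a) != n for a, n in per_area.items()):
            continue
        info = sum(bank[i][0] for i in picks)
        if info > best_info:
            best, best_info = picks, info
    return best, best_info

form, info = assemble(bank, 4, {"algebra": 2, "geometry": 2})
```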
Photonomics: automation approaches yield economic aikido for photonics device manufacture
NASA Astrophysics Data System (ADS)
Jordan, Scott
2002-09-01
In the glory days of photonics, with exponentiating demand for photonics devices came exponentiating competition, with new ventures commencing deliveries seemingly weekly. Suddenly the industry was faced with a commodity marketplace well before a commodity cost structure was in place. Economic issues like cost, scalability, and yield (call it all "Photonomics") now drive the industry. Automation and throughput-optimization are obvious answers, but until now, suitable modular tools had not been introduced. Available solutions were barely compatible with typical transverse alignment tolerances and could not automate angular alignments of collimated devices and arrays. And settling physics served as the insoluble bottleneck to throughput and resolution advancement in packaging, characterization and fabrication processes. The industry has addressed these needs in several ways, ranging from special configurations of catalog motion devices to integrated microrobots based on a novel mini-hexapod configuration. This intriguing approach allows tip/tilt alignments to be automated about any point in space, such as a beam waist, a focal point, the cleaved face of a fiber, or the optical axis of a waveguide, ideal for MEMS packaging automation and array alignment. Meanwhile, patented new low-cost settling-enhancement technology has been applied in applications ranging from air-bearing long-travel stages to subnanometer-resolution piezo positioners to advance resolution and process cycle-times in sensitive applications such as optical coupling characterization and fiber Bragg grating generation. Background, examples and metrics are discussed, providing an up-to-date industry overview of available solutions.
NASA Technical Reports Server (NTRS)
Khambatta, Cyrus F.
2007-01-01
A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system, and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.
Producing genome structure populations with the dynamic and automated PGS software.
Hua, Nan; Tjong, Harianto; Shin, Hanjun; Gong, Ke; Zhou, Xianghong Jasmine; Alber, Frank
2018-05-01
Chromosome conformation capture technologies such as Hi-C are widely used to investigate the spatial organization of genomes. Because genome structures can vary considerably between individual cells of a population, interpreting ensemble-averaged Hi-C data can be challenging, in particular for long-range and interchromosomal interactions. We pioneered a probabilistic approach for the generation of a population of distinct diploid 3D genome structures consistent with all the chromatin-chromatin interaction probabilities from Hi-C experiments. Each structure in the population is a physical model of the genome in 3D. Analysis of these models yields new insights into the causes and the functional properties of the genome's organization in space and time. We provide a user-friendly software package, called PGS, which runs on local machines (for practice runs) and high-performance computing platforms. PGS takes a genome-wide Hi-C contact frequency matrix, along with information about genome segmentation, and produces an ensemble of 3D genome structures entirely consistent with the input. The software automatically generates an analysis report, and provides tools to extract and analyze the 3D coordinates of specific domains. Basic Linux command-line knowledge is sufficient for using this software. A typical running time of the pipeline is ∼3 d with 300 cores on a computer cluster to generate a population of 1,000 diploid genome structures at topological-associated domain (TAD)-level resolution.
Yoshioka, Craig; Pulokas, James; Fellmann, Denis; Potter, Clinton S.; Milligan, Ronald A.; Carragher, Bridget
2007-01-01
Visualization by electron microscopy has provided many insights into the composition, quaternary structure, and mechanism of macromolecular assemblies. By preserving samples in stain or vitreous ice it is possible to image them as discrete particles, and from these images generate three-dimensional structures. This ‘single-particle’ approach suffers from two major shortcomings: it requires an initial model to reconstitute 2D data into a 3D volume, and it often fails when faced with conformational variability. Random conical tilt (RCT) and orthogonal tilt (OTR) are methods developed to overcome these problems, but the data collection required, particularly for vitreous ice specimens, is difficult and tedious. In this paper we present an automated approach to RCT/OTR data collection that removes the burden of manual collection and offers higher quality and throughput than is otherwise possible. We show example datasets collected under stain and cryo conditions and provide statistics related to the efficiency and robustness of the process. Furthermore, we describe the new algorithms that make this method possible, which include new calibrations, improved targeting and feature-based tracking. PMID:17524663
Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images
NASA Astrophysics Data System (ADS)
Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel
2016-02-01
Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
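The single-frequency Gabor idea at the heart of the method can be illustrated on a 1D intensity profile: a regular, sarcomere-like striation gives a strong response at its striation frequency, while a disordered profile does not, and dividing by the ideal-pattern response yields a score normalized toward [0, 1]. This is a simplified sketch of the general technique, not the authors' exact normalization procedure.

```python
import numpy as np

def gabor_magnitude(profile, freq, sigma=8.0):
    """Magnitude of a 1D complex Gabor response at a single frequency."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)
    response = np.convolve(profile, kernel, mode="same")
    return np.abs(response)

x = np.arange(256)
regular = np.cos(2 * np.pi * x / 16)   # period-16 "sarcomere" striations
rng = np.random.default_rng(0)
noisy = rng.standard_normal(256)       # disordered profile

# Score relative to the ideal pattern's peak response.
ideal = gabor_magnitude(regular, 1 / 16).max()
score_regular = gabor_magnitude(regular, 1 / 16).mean() / ideal
score_noise = gabor_magnitude(noisy, 1 / 16).mean() / ideal
```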
Automated Content Detection for Cassini Images
NASA Astrophysics Data System (ADS)
Stanboli, A.; Bue, B.; Wagstaff, K.; Altinok, A.
2017-06-01
NASA missions generate numerous images that are organized in increasingly large archives. Image archives are currently not searchable by image content. We present an automated content detection prototype that can enable content search.
Automated batch fiducial-less tilt-series alignment in Appion using Protomo.
Noble, Alex J; Stagg, Scott M
2015-11-01
The field of electron tomography has benefited greatly from manual and semi-automated approaches to marker-based tilt-series alignment that have allowed for the structural determination of multitudes of in situ cellular structures as well as macromolecular structures of individual protein complexes. The emergence of complementary metal-oxide semiconductor detectors capable of detecting individual electrons has enabled the collection of low dose, high contrast images, opening the door for reliable correlation-based tilt-series alignment. Here we present a set of automated, correlation-based tilt-series alignment, contrast transfer function (CTF) correction, and reconstruction workflows for use in conjunction with the Appion/Leginon package that are primarily targeted at automating structure determination with cryogenic electron microscopy. Copyright © 2015 Elsevier Inc. All rights reserved.
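Correlation-based alignment of the kind enabled by high-contrast detectors rests on a standard building block: estimating the translation between two images from the peak of their FFT-based cross-correlation. A minimal sketch (illustrative, not the Appion/Protomo implementation):

```python
import numpy as np

def shift_between(a, b):
    """Estimate the integer (row, col) translation d such that rolling
    image `a` by d yields image `b`, via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond half the image size into negative offsets.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, a.shape))

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
```

Chaining such pairwise shift estimates across neighboring tilt images is the essence of fiducial-less tilt-series alignment.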
Automated structure solution, density modification and model building.
Terwilliger, Thomas C
2002-11-01
The approaches that form the basis of automated structure solution in SOLVE and RESOLVE are described. The use of a scoring scheme to convert decision making in macromolecular structure solution to an optimization problem has proven very useful and in many cases a single clear heavy-atom solution can be obtained and used for phasing. Statistical density modification is well suited to an automated approach to structure solution because the method is relatively insensitive to choices of numbers of cycles and solvent content. The detection of non-crystallographic symmetry (NCS) in heavy-atom sites and checking of potential NCS operations against the electron-density map has proven to be a reliable method for identification of NCS in most cases. Automated model building beginning with an FFT-based search for helices and sheets has been successful in automated model building for maps with resolutions as low as 3 Å. The entire process can be carried out in a fully automatic fashion in many cases.
Statechart Analysis with Symbolic PathFinder
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.
2012-01-01
We report here on our on-going work that addresses the automated analysis and test case generation for software systems modeled using multiple Statechart formalisms. The work is motivated by large programs such as NASA Exploration, that involve multiple systems that interact via safety-critical protocols and are designed with different Statechart variants. To verify these safety-critical systems, we have developed Polyglot, a framework for modeling and analysis of model-based software written using different Statechart formalisms. Polyglot uses a common intermediate representation with customizable Statechart semantics and leverages the analysis and test generation capabilities of the Symbolic PathFinder tool. Polyglot is used as follows: First, the structure of the Statechart model (expressed in Matlab Stateflow or Rational Rhapsody) is translated into a common intermediate representation (IR). The IR is then translated into Java code that represents the structure of the model. The semantics are provided as "pluggable" modules.
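The "pluggable semantics" design described above, one intermediate representation interpreted by interchangeable semantic modules, can be sketched abstractly. The IR shape, names, and step function below are illustrative only, not Polyglot's actual representation or API.

```python
# Minimal statechart IR: states, an initial state, and transitions
# given as (source, event, destination) triples.
IR = {
    "initial": "Idle",
    "transitions": [("Idle", "start", "Active"), ("Active", "stop", "Idle")],
}

def first_match_step(ir, state, event):
    """One pluggable semantics: take the first enabled transition,
    otherwise remain in the current state. A different Statechart
    variant would supply a different step function over the same IR."""
    for src, ev, dst in ir["transitions"]:
        if src == state and ev == event:
            return dst
    return state

def run(ir, events, step):
    """Execute an event sequence under a given semantics module."""
    state = ir["initial"]
    for ev in events:
        state = step(ir, state, ev)
    return state
```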
NASA Astrophysics Data System (ADS)
Agn, Mikael; Law, Ian; Munck af Rosenschöld, Per; Van Leemput, Koen
2016-03-01
We present a fully automated generative method for simultaneous brain tumor and organs-at-risk segmentation in multi-modal magnetic resonance images. The method combines an existing whole-brain segmentation technique with a spatial tumor prior, which uses convolutional restricted Boltzmann machines to model tumor shape. The method is not tuned to any specific imaging protocol and can simultaneously segment the gross tumor volume, peritumoral edema and healthy tissue structures relevant for radiotherapy planning. We validate the method on a manually delineated clinical data set of glioblastoma patients by comparing segmentations of gross tumor volume, brainstem and hippocampus. The preliminary results demonstrate the feasibility of the method.
Meshing of a Spiral Bevel Gearset with 3D Finite Element Analysis
NASA Technical Reports Server (NTRS)
Bibel, George D.; Handschuh, Robert
1996-01-01
Recent advances in spiral bevel gear geometry and finite element technology make it practical to conduct a structural analysis and analytically roll the gearset through mesh. With the advent of user specific programming linked to 3D solid modelers and mesh generators, model generation has become greatly automated. Contact algorithms available in general purpose finite element codes eliminate the need for the use and alignment of gap elements. Once the gearset is placed in mesh, user subroutines attached to the FE code easily roll the gearset through mesh. The method is described in detail. Preliminary results for a gearset segment showing the progression of the contact line load are given as the gears roll through mesh.
Numerical grid generation in computational field simulations. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, B.K.; Thompson, J.F.; Haeuser, J.
1996-12-31
To enhance the CFS technology to its next level of applicability (i.e., to create acceptance of CFS in an integrated product and process development involving multidisciplinary optimization) the basic requirements are: rapid turn-around time, reliable and accurate simulation, affordability and appropriate linkage to other engineering disciplines. In response to this demand, there has been considerable growth in the grid generation related research activities involving automation, parallel processing, linkage with the CAD-CAM systems, CFS with dynamic motion and moving boundaries, strategies and algorithms associated with multi-block structured, unstructured, hybrid, hexahedral, and Cartesian grids, along with its applicability to various disciplines including biomedical, semiconductor, geophysical, ocean modeling, and multidisciplinary optimization.
NASA Astrophysics Data System (ADS)
Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang
2005-04-01
Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.
Towards an automated intelligence product generation capability
NASA Astrophysics Data System (ADS)
Smith, Alison M.; Hawes, Timothy W.; Nolan, James J.
2015-05-01
Creating intelligence information products is a time consuming and difficult process for analysts faced with identifying key pieces of information relevant to a complex set of information requirements. Complicating matters, these key pieces of information exist in multiple modalities scattered across data stores, buried in huge volumes of data. This results in the current predicament analysts find themselves in: information retrieval and management consumes huge amounts of time that could be better spent performing analysis. The persistent growth in data accumulation rates will only increase the amount of time spent on these tasks without a significant advance in automated solutions for information product generation. We present a product generation tool, Automated PrOduct Generation and Enrichment (APOGEE), which aims to automate the information product creation process in order to shift the bulk of the analysts' effort from data discovery and management to analysis. APOGEE discovers relevant text, imagery, video, and audio for inclusion in information products using semantic and statistical models of unstructured content. APOGEE's mixed-initiative interface, supported by highly responsive backend mechanisms, allows analysts to dynamically control the product generation process, ensuring a maximally relevant result. The combination of these capabilities results in significant reductions in the time it takes analysts to produce information products while helping to increase the overall coverage. Through evaluation with a domain expert, APOGEE has been shown to have the potential to cut product generation time by 20x. The result is a flexible end-to-end system that can be rapidly deployed in new operational settings.
Overcoming Barriers to Technology Adoption in Small Manufacturing Enterprises (SMEs)
2003-06-01
automates quote-generation, order-processing workflow management, performance analysis, and accounting functions. Ultimately, it will enable Magdic...that Magdic implement an MES instead. The MES, in addition to solving the problem of document management, would automate quote-generation, order processing, workflow management, performance analysis, and accounting functions. To help Magdic personnel learn about the MES, TIDE personnel provided
Automated Report Generation for Research Data Repositories: From i2b2 to PDF.
Thiemann, Volker S; Xu, Tingyan; Röhrig, Rainer; Majeed, Raphael W
2017-01-01
We developed an automated toolchain to generate reports of i2b2 data. It is based on free open source software and runs on a Java Application Server. It is successfully used in an ED registry project. The solution is highly configurable and portable to other projects based on i2b2 or compatible factual data sources.
Integrating Test-Form Formatting into Automated Test Assembly
ERIC Educational Resources Information Center
Diao, Qi; van der Linden, Wim J.
2013-01-01
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Using Automated Scores of Student Essays to Support Teacher Guidance in Classroom Inquiry
ERIC Educational Resources Information Center
Gerard, Libby F.; Linn, Marcia C.
2016-01-01
Computer scoring of student written essays about an inquiry topic can be used to diagnose student progress both to alert teachers to struggling students and to generate automated guidance. We identify promising ways for teachers to add value to automated guidance to improve student learning. Three teachers from two schools and their 386 students…
A Geometry Based Infra-structure for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
1997-01-01
The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. Specifically the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis or using the above procedure for design becomes prohibitive.
Wood, Scott T; Dean, Brian C; Dean, Delphine
2013-04-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
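The fiber-network step above computes a superposition of candidate fibers that best matches the observed image. A toy sketch of that idea follows; the paper uses a linear-programming objective, substituted here with non-negative least squares for brevity, and the candidate fibers are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical candidate fibers rendered as intensity vectors over 6 pixels;
# columns of the matrix are the candidate fibers.
fibers = np.array([
    [1, 1, 0, 0, 0, 0],   # fiber 0
    [0, 0, 1, 1, 0, 0],   # fiber 1
    [0, 0, 0, 0, 1, 1],   # fiber 2
], dtype=float).T

# Target "confocal" intensity: fiber 0 at weight 2 plus fiber 2 at weight 1.
target = np.array([2, 2, 0, 0, 1, 1], dtype=float)

# Non-negative weights minimizing the discrepancy with the image; the
# nonzero weights identify the representative fiber network.
weights, residual = nnls(fibers, target)
```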
Domain specific software architectures: Command and control
NASA Technical Reports Server (NTRS)
Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave
1992-01-01
GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.
Dokarry, Melissa; Laurendon, Caroline; O'Maille, Paul E
2012-01-01
Structure-based combinatorial protein engineering (SCOPE) is a homology-independent recombination method to create multiple crossover gene libraries by assembling defined combinations of structural elements ranging from single mutations to domains of protein structure. SCOPE was originally inspired by DNA shuffling, which mimics recombination during meiosis, where mutations from parental genes are "shuffled" to create novel combinations in the resulting progeny. DNA shuffling utilizes sequence identity between parental genes to mediate template-switching events (the annealing and extension of one parental gene fragment on another) in PCR reassembly reactions to generate crossovers and hence recombination between parental genes. In light of the conservation of protein structure and degeneracy of sequence, SCOPE was developed to enable the "shuffling" of distantly related genes with no requirement for sequence identity. The central principle involves the use of oligonucleotides to encode for crossover regions to choreograph template-switching events during PCR assembly of gene fragments to create chimeric genes. This approach was initially developed to create libraries of hybrid DNA polymerases from distantly related parents, and later developed to create a combinatorial mutant library of sesquiterpene synthases to explore the catalytic landscapes underlying the functional divergence of related enzymes. This chapter presents a simplified protocol of SCOPE that can be integrated with different mutagenesis techniques and is suitable for automation by liquid-handling robots. Two examples are presented to illustrate the application of SCOPE to create gene libraries using plant sesquiterpene synthases as the model system. In the first example, we outline how to create an active-site library as a series of complex mixtures of diverse mutants. 
In the second example, we outline how to create a focused library as an array of individual clones to distil minimal combinations of functionally important mutations. Through these examples, the principles of the technique are illustrated and the suitability of automating various aspects of the procedure for given applications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
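The combinatorial structure of a SCOPE library is straightforward to enumerate: each chimeric gene is one choice of variant per structural element, with oligonucleotides encoding the crossovers between them. A toy sketch with hypothetical region names:

```python
from itertools import product

# Hypothetical parental "structural elements": two parents (A and B)
# contributing alternative versions of three regions of a synthase gene.
regions = [
    ("A1", "B1"),   # region 1 variants
    ("A2", "B2"),   # region 2 variants
    ("A3", "B3"),   # region 3 variants
]

# SCOPE-style combinatorial assembly: every combination of region variants
# yields one chimeric gene (2^3 = 8 chimeras, including both parents).
library = ["-".join(choice) for choice in product(*regions)]
```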
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-10-01
The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
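The core comparison in this study is simple: average the Monte Carlo dose map over each segmented region and compare the automatic and expert versions. A toy sketch (hypothetical dose values and labels) shows why a single mis-segmented boundary voxel shifts the mean only modestly:

```python
import numpy as np

def mean_organ_dose(dose_map, labels, organ):
    """Mean dose over the voxels carrying label `organ` in a segmentation."""
    return float(dose_map[labels == organ].mean())

# Toy 4x4 dose map with an "expert" and an "auto" two-region segmentation
# that disagree on a single boundary voxel.
dose = np.array([[1, 1, 5, 5],
                 [1, 1, 5, 5],
                 [1, 1, 5, 5],
                 [1, 1, 5, 5]], dtype=float)
expert = np.array([[0, 0, 1, 1]] * 4)
auto = expert.copy()
auto[0, 1] = 1  # one mis-segmented boundary voxel

expert_mean = mean_organ_dose(dose, expert, 1)
auto_mean = mean_organ_dose(dose, auto, 1)
error_pct = 100 * (auto_mean - expert_mean) / expert_mean
```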
Experiments with Test Case Generation and Runtime Analysis
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Drusinsky, Doron; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareanu, Corina; Rosu, Grigore; Visser, Willem; Koga, Dennis (Technical Monitor)
2003-01-01
Software testing is typically an ad hoc process where human testers manually write many test inputs and expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports preliminary results on an approach to further automate this process. The approach consists of combining automated test case generation based on systematically exploring the program's input domain, with runtime analysis, where execution traces are monitored and verified against temporal logic specifications, or analyzed using advanced algorithms for detecting concurrency errors such as data races and deadlocks. The approach suggests generating specifications dynamically per input instance rather than statically once-and-for-all. The paper describes experiments with variants of this approach in the context of two examples, a planetary rover controller and a spacecraft fault protection system.
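The runtime-analysis half of the approach can be pictured as a monitor that checks a recorded execution trace against a property. The property and trace below are invented for illustration; the paper's monitors verify temporal logic specifications and detect concurrency errors such as data races:

```python
def monitor(trace):
    """Check the (illustrative) property: every 'open' event is eventually
    followed by a matching 'close' event. Returns True iff the property holds."""
    open_resources = set()
    for event, name in trace:
        if event == "open":
            open_resources.add(name)
        elif event == "close":
            open_resources.discard(name)
    return len(open_resources) == 0

# A passing trace and a violating trace (the 'cfg' resource is never closed).
good = [("open", "log"), ("close", "log")]
bad  = [("open", "log"), ("open", "cfg"), ("close", "log")]
```

Pairing such monitors with automatically generated test inputs is what lets one specification check many distinct executions.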
NASA Technical Reports Server (NTRS)
Doggett, William R.
1992-01-01
The topics are presented in viewgraph form and include: automated structures assembly facility current control hierarchy; automated structures assembly facility proposed control hierarchy; end-effector software state transition diagram; block diagram for ideal install composite; and conclusions.
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
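The core idea, a compiler as a sequence of model transformations each producing a more specialized model, can be sketched as follows. The base model and both transformations are invented stand-ins, far simpler than the Reaction Wheel Assembly models:

```python
def compile_model(base_model, transformations):
    """Apply a sequence of transformations to a base model; each step
    yields a more specialized, task-specific model."""
    model = dict(base_model)
    for transform in transformations:
        model = transform(model)
    return model

def drop_behavior_detail(m):
    # Troubleshooting does not need transient behavior (toy abstraction step).
    m = dict(m)
    m.pop("transient_behavior", None)
    return m

def keep_fault_paths(m):
    # Keep only components that can lie on a fault path (toy precision step).
    m = dict(m)
    m["components"] = [c for c in m["components"] if c["in_fault_path"]]
    return m

base = {"components": [{"name": "wheel", "in_fault_path": True},
                       {"name": "housing", "in_fault_path": False}],
        "transient_behavior": "omitted"}
troubleshooting_model = compile_model(base, [drop_behavior_detail, keep_fault_paths])
```

Because the derived model is a pure function of the base model and the transformation sequence, it can be regenerated automatically whenever the base model changes, which is the consistency benefit the abstract describes.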
Giger, Maryellen L.; Chen, Chin-Tu; Armato, Samuel; Doi, Kunio
1999-10-26
A method and system for the computerized registration of radionuclide images with radiographic images, including generating image data from radiographic and radionuclide images of the thorax. Techniques include contouring the lung regions in each type of chest image, scaling and registration of the contours based on location of lung apices, and superimposition after appropriate shifting of the images. Specific applications are given for the automated registration of radionuclide lungs scans with chest radiographs. The method in the example given yields a system that spatially registers and correlates digitized chest radiographs with V/Q scans in order to correlate V/Q functional information with the greater structural detail of chest radiographs. Final output could be the computer-determined contours from each type of image superimposed on any of the original images, or superimposition of the radionuclide image data, which contains high activity, onto the radiographic chest image.
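A toy sketch of the apex-based alignment step, assuming contours are lists of (column, row) points and using translation only; the actual system also scales the contours before superimposition, and all coordinates here are invented:

```python
def apex(contour):
    """Topmost contour point (smallest row index): the lung apex."""
    return min(contour, key=lambda p: p[1])

def register(moving, fixed):
    """Shift the moving contour so its apex coincides with the fixed apex."""
    mx, my = apex(moving)
    fx, fy = apex(fixed)
    dx, dy = fx - mx, fy - my
    return [(x + dx, y + dy) for x, y in moving]

# Hypothetical lung contours from a chest radiograph and a radionuclide scan.
radiograph_lung = [(10, 5), (8, 20), (14, 22)]
nuclide_lung    = [(12, 9), (10, 24), (16, 26)]
aligned = register(nuclide_lung, radiograph_lung)
```

After this shift, the functional (radionuclide) data can be superimposed on the structurally detailed radiograph, as in the V/Q application described above.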
Simulation based optimization on automated fibre placement process
NASA Astrophysics Data System (ADS)
Lei, Shi
2018-02-01
In this paper, a software simulation (Autodesk TruPlan & TruFiber) based method is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data are taken into consideration prior to tool path generation to achieve a high manufacturing success rate.
Possibilities for serial femtosecond crystallography sample delivery at future light sources
Chavas, L. M. G.; Gumprecht, L.; Chapman, H. N.
2015-01-01
Serial femtosecond crystallography (SFX) uses X-ray pulses from free-electron laser (FEL) sources that can outrun radiation damage and thereby overcome long-standing limits in the structure determination of macromolecular crystals. Intense X-ray FEL pulses of sufficiently short duration allow the collection of damage-free data at room temperature and give the opportunity to study irreversible time-resolved events. SFX may open the way to determine the structure of biological molecules that fail to crystallize readily into large well-diffracting crystals. Taking advantage of FELs with high pulse repetition rates could lead to short measurement times of just minutes. Automated delivery of sample suspensions for SFX experiments could potentially give rise to a much higher rate of obtaining complete measurements than at today's third generation synchrotron radiation facilities, as no crystal alignment or complex robotic motions are required. This capability will also open up extensive time-resolved structural studies. New challenges arise from the resulting high rate of data collection, and in providing reliable sample delivery. Various developments for fully automated high-throughput SFX experiments are being considered for evaluation, including new implementations for a reliable yet flexible sample environment setup. Here, we review the different methods developed so far that best achieve sample delivery for X-ray FEL experiments and present some considerations towards the goal of high-throughput structure determination with X-ray FELs. PMID:26798808
A “loop” shape descriptor and its application to automated segmentation of airways from CT scans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu, Jiantao; Jin, Chenwang, E-mail: jcw76@163.com; Yu, Nan
2015-06-15
Purpose: A novel shape descriptor is presented to aid an automated identification of the airways depicted on computed tomography (CT) images. Methods: Instead of simplifying the tubular characteristic of the airways as an ideal mathematical cylindrical or circular shape, the proposed “loop” shape descriptor exploits the fact that the cross sections of any tubular structure (regardless of its regularity) always appear as a loop. In implementation, the authors first reconstruct the anatomical structures in volumetric CT as a three-dimensional surface model using the classical marching cubes algorithm. Then, the loop descriptor is applied to locate the airways with a concave loop cross section. To deal with the variation of the airway walls in density as depicted on CT images, a multiple threshold strategy is proposed. A publicly available chest CT database consisting of 20 CT scans, which was designed specifically for evaluating an airway segmentation algorithm, was used for quantitative performance assessment. Measures, including length, branch count, and generations, were computed with the aid of a skeletonization operation. Results: For the test dataset, the airway length ranged from 64.6 to 429.8 cm, the generation ranged from 7 to 11, and the branch number ranged from 48 to 312. These results were comparable to the performance of the state-of-the-art algorithms validated on the same dataset. Conclusions: The authors’ quantitative experiment demonstrated the feasibility and reliability of the developed shape descriptor in identifying lung airways.
Automation of steam generator services at public service electric & gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cruickshank, H.; Wray, J.; Scull, D.
1995-03-01
Public Service Electric & Gas takes an aggressive approach to pursuing new exposure reduction techniques. Evaluation of historic outage exposure shows that over the last eight refueling outages, primary steam generator work has averaged sixty-six (66) person-rem, or approximately twenty-five percent (25%) of the general outage exposure at Salem Station. This maintenance evolution represents the largest percentage of exposure for any single activity. Because of this, primary steam generator work represents an excellent opportunity for the development of significant exposure reduction techniques. A study of primary steam generator maintenance activities demonstrated that seventy-five percent (75%) of radiation exposure was due to work activities on the primary steam generator platform, and that development of automated methods for performing these activities was worth pursuing. Existing robotics systems were examined and it was found that a new approach would have to be developed. This resulted in a joint research and development project between Westinghouse and Public Service Electric & Gas to develop an automated system of accomplishing the Health Physics functions on the primary steam generator platform. R.O.M.M.R.S. (Remotely Operated Managed Maintenance Robotics System) was the result of this venture.
An automated method for modeling proteins on known templates using distance geometry.
Srinivasan, S; March, C J; Sudarsanam, S
1993-02-01
We present an automated method incorporated into a software package, FOLDER, to fold a protein sequence on a given three-dimensional (3D) template. Starting with the sequence alignment of a family of homologous proteins, tertiary structures are modeled using the known 3D structure of one member of the family as a template. Homologous interatomic distances from the template are used as constraints. For nonhomologous regions in the model protein, the lower and the upper bounds for the interatomic distances are imposed by steric constraints and the globular dimensions of the template, respectively. Distance geometry is used to embed an ensemble of structures consistent with these distance bounds. Structures are selected from this ensemble based on minimal distance error criteria, after a penalty function optimization step. These structures are then refined using energy optimization methods. The method is tested by simulating the alpha-chain of horse hemoglobin using the alpha-chain of human hemoglobin as the template and by comparing the generated models with the crystal structure of the alpha-chain of horse hemoglobin. We also test the packing efficiency of this method by reconstructing the atomic positions of the interior side chains beyond C beta atoms of a protein domain from a known 3D structure. In both test cases, models retain the template constraints and any additionally imposed constraints while the packing of the interior residues is optimized with no short contacts or bond deformations. To demonstrate the use of this method in simulating structures of proteins with nonhomologous disulfides, we construct a model of murine interleukin (IL)-4 using the NMR structure of human IL-4 as the template. The resulting geometry of the nonhomologous disulfide in the model structure for murine IL-4 is consistent with standard disulfide geometry.
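The distance-bound bookkeeping at the heart of the method can be sketched as a violation score over lower/upper interatomic distance bounds: structures embedded by distance geometry are kept only if they stay within the bounds taken from the template. The coordinates and bounds below are toy values, not hemoglobin or IL-4 data:

```python
import math

def bound_error(coords, bounds):
    """Sum of violations of [lower, upper] distance bounds between atom pairs;
    0.0 means the structure is consistent with every bound."""
    err = 0.0
    for (i, j), (lo, hi) in bounds.items():
        d = math.dist(coords[i], coords[j])
        err += max(0.0, lo - d) + max(0.0, d - hi)
    return err

# Toy 3-atom structure and bounds: tight "homologous" bounds from a template,
# plus one loose steric/globular bound.
coords = {0: (0.0, 0.0, 0.0), 1: (1.5, 0.0, 0.0), 2: (1.5, 1.5, 0.0)}
bounds = {(0, 1): (1.4, 1.6),
          (1, 2): (1.4, 1.6),
          (0, 2): (1.8, 4.0)}
```

In the full method, an ensemble of embedded structures is ranked by exactly this kind of distance-error criterion before penalty-function optimization and energy refinement.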
Automated aerial image based CD metrology initiated by pattern marking with photomask layout data
NASA Astrophysics Data System (ADS)
Davis, Grant; Choi, Sun Young; Jung, Eui Hee; Seyfarth, Arne; van Doornmalen, Hans; Poortinga, Eric
2007-05-01
The photomask is a critical element in the lithographic image transfer process from the drawn layout to the final structures on the wafer. The non-linearity of the imaging process and the related MEEF impose a tight control requirement on the photomask critical dimensions. Critical dimensions can be measured in aerial images with hardware emulation. This is a more recent complement to the standard scanning electron microscope measurement of wafers and photomasks. Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging that cannot be observed directly by SEM measurement of the mask. Aerial image measurement excludes the processing effects of printing and etching on the wafer. This presents a unique contribution to the difficult process control and modeling tasks in mask making. In the past, aerial image measurements have been used mainly to characterize the printability of mask repair sites. Development of photomask CD characterization with the AIMS™ tool was motivated by the benefit of MEEF sensitivity and the shorter feedback loop compared to wafer exposures. This paper describes a new application that includes: an improved interface for the selection of meaningful locations using the photomask and design layout data with the Calibre™ Metrology Interface, an automated recipe generation process, an automated measurement process, and automated analysis and result reporting on a Carl Zeiss AIMS™ system.
Automated recycling of chemistry for virtual screening and library design.
Vainio, Mikko J; Kogej, Thierry; Raubacher, Florian
2012-07-23
An early stage drug discovery project needs to identify a number of chemically diverse and attractive compounds. These hit compounds are typically found through high-throughput screening campaigns. The diversity of the chemical libraries used in screening is therefore important. In this study, we describe a virtual high-throughput screening system called Virtual Library. The system automatically "recycles" validated synthetic protocols and available starting materials to generate a large number of virtual compound libraries, and allows for fast searches in the generated libraries using a 2D fingerprint based screening method. Virtual Library links the returned virtual hit compounds back to experimental protocols to quickly assess the synthetic accessibility of the hits. The system can be used as an idea generator for library design to enrich the screening collection and to explore the structure-activity landscape around a specific active compound.
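The 2D-fingerprint search can be sketched minimally by representing each fingerprint as a set of on-bits and scoring with Tanimoto similarity. Compound names, bit positions, and the 0.6 threshold below are invented; the abstract does not specify the fingerprint type:

```python
def tanimoto(fp1, fp2):
    """Tanimoto similarity of two fingerprints given as sets of on-bit indices."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 0.0

query = {1, 4, 9, 17, 33}  # fingerprint of an active compound (toy bits)
library = {
    "cmpd_a": {1, 4, 9, 17, 40},
    "cmpd_b": {2, 5, 8},
    "cmpd_c": {1, 4, 9, 17, 33, 50},
}
hits = {name for name, fp in library.items() if tanimoto(query, fp) >= 0.6}
```

In the described system, each virtual hit is then traced back to the synthetic protocol and starting materials that generated it, which is what makes the synthetic accessibility assessment fast.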
Roussis, S G
2001-08-01
The automated acquisition of the product ion spectra of all precursor ions in a selected mass range by using a magnetic sector/orthogonal acceleration time-of-flight (oa-TOF) tandem mass spectrometer for the characterization of complex petroleum mixtures is reported. Product ion spectra are obtained by rapid oa-TOF data acquisition and simultaneous scanning of the magnet. An analog signal generator is used for the scanning of the magnet. Slow magnet scanning rates permit the accurate profiling of precursor ion peaks and the acquisition of product ion spectra for all isobaric ion species. The ability of the instrument to perform both high- and low-energy collisional activation experiments provides access to a large number of dissociation pathways useful for the characterization of precursor ions. Examples are given that illustrate the capability of the method for the characterization of representative petroleum mixtures. The structural information obtained by the automated MS/MS experiment is used in combination with high-resolution accurate mass measurement results to characterize unknown components in a polar extract of a refinery product. The exhaustive mapping of all precursor ions in representative naphtha and middle-distillate fractions is presented. Sets of isobaric ion species are separated and their structures are identified by interpretation from first principles or by comparison with standard 70-eV EI libraries of spectra. The utility of the method increases with the complexity of the samples.
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Özdemir, Vural; Hekim, Nezih
2018-01-01
Driverless cars with artificial intelligence (AI) and automated supermarkets run by collaborative robots (cobots) working without human supervision have sparked off new debates: what will be the impacts of extreme automation, turbocharged by the Internet of Things (IoT), AI, and the Industry 4.0, on Big Data and omics implementation science? The IoT builds on (1) broadband wireless internet connectivity, (2) miniaturized sensors embedded in animate and inanimate objects ranging from the house cat to the milk carton in your smart fridge, and (3) AI and cobots making sense of Big Data collected by sensors. Industry 4.0 is a high-tech strategy for manufacturing automation that employs the IoT, thus creating the Smart Factory. Extreme automation until "everything is connected to everything else" poses, however, vulnerabilities that have been little considered to date. First, highly integrated systems are vulnerable to systemic risks such as total network collapse in the event of failure of one of its parts, for example, by hacking or Internet viruses that can fully invade integrated systems. Second, extreme connectivity creates new social and political power structures. If left unchecked, they might lead to authoritarian governance by one person in total control of network power, directly or through her/his connected surrogates. We propose Industry 5.0 that can democratize knowledge coproduction from Big Data, building on the new concept of symmetrical innovation. Industry 5.0 utilizes IoT, but differs from predecessor automation systems by having three-dimensional (3D) symmetry in innovation ecosystem design: (1) a built-in safe exit strategy in case of demise of hyperconnected entrenched digital knowledge networks. 
Importantly, such safe exits are orthogonal, in that they allow "digital detox" by employing pathways unrelated to or unaffected by automated networks, for example, electronic patient records versus material/article trails on vital medical information; (2) equal emphasis on both acceleration and deceleration of innovation if diminishing returns become apparent; and (3) next generation social science and humanities (SSH) research for global governance of emerging technologies: "Post-ELSI Technology Evaluation Research" (PETER). Importantly, PETER considers the technology opportunity costs, ethics, ethics-of-ethics, framings (epistemology), independence, and reflexivity of SSH research in technology policymaking. Industry 5.0 is poised to harness extreme automation and Big Data with safety, innovative technology policy, and responsible implementation science, enabled by 3D symmetry in innovation ecosystem design.
Vorberg, Ellen; Fleischer, Heidi; Junginger, Steffen; Liu, Hui; Stoll, Norbert; Thurow, Kerstin
2016-10-01
Life science areas require specific sample pretreatment to increase the concentration of the analytes and/or to convert the analytes into an appropriate form for the detection and separation systems. Various workstations are commercially available, allowing for automated biological sample pretreatment. Nevertheless, due to the required temperature, pressure, and volume conditions in typical element and structure-specific measurements, automated platforms are not suitable for analytical processes. Thus, the purpose of the presented investigation was the design, realization, and evaluation of an automated system ensuring high-precision sample preparation for a variety of analytical measurements. The developed system has to enable system adaption and high performance flexibility. Furthermore, the system has to be capable of dealing with the wide range of required vessels simultaneously, allowing for less costly and time-consuming process steps. The system's functionality was confirmed in various validation sequences. Using element-specific measurements, the automated system was up to 25% more precise compared to the manual procedure and as precise as the manual procedure using structure-specific measurements. © 2015 Society for Laboratory Automation and Screening.
NASA Astrophysics Data System (ADS)
Bouquerel, Laure; Moulin, Nicolas; Drapier, Sylvain; Boisse, Philippe; Beraud, Jean-Marc
2017-10-01
While weight has so far been the main driver for the development of prepreg-based composite solutions for aeronautics, a new weight-cost trade-off tends to drive choices for next-generation aircraft. As a response, Hexcel has designed a new dry reinforcement type for aircraft primary structures, which combines the benefits of automation, out-of-autoclave process cost-effectiveness, and mechanical performance competitive with prepreg solutions: HiTape® is a unidirectional (UD) dry carbon reinforcement with a thermoplastic veil on each side designed for aircraft primary structures [1-3]. One privileged process route for HiTape® in high volume automated processes consists in forming initially flat dry reinforcement stacks before resin infusion [4] or injection. Simulation of the forming step aims at predicting the geometry and mechanical properties of the formed stack (so-called preform) for process optimisation. Extensive work has been carried out on the forming behaviour and simulation of prepregs and dry woven fabrics, but interest in dry non-woven reinforcements has emerged more recently. Some work has been done on non-crimp fabrics, but studies on the forming behaviour of UDs are scarce and deal with UD prepregs only. Tension and bending in the fibre direction, along with inter-ply friction, have been identified as the main mechanisms controlling the HiTape® response during forming. Bending has been characterised using a modified Peirce's flexometer [5] and an inter-ply friction study is under development. Anisotropic hyperelastic constitutive models have been selected to represent the assumed decoupled deformation mechanisms. Model parameters are then identified from the associated experimental results. For forming simulation, a continuous approach at the macroscopic scale has been selected first, and simulation is carried out in the Zset framework [6] using appropriate shell finite elements.
High-throughput Crystallography for Structural Genomics
Joachimiak, Andrzej
2009-01-01
Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress and now over 53,000 protein structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact. PMID:19765976
Cassini-Huygens maneuver automation for navigation
NASA Technical Reports Server (NTRS)
Goodson, Troy; Attiyah, Amy; Buffington, Brent; Hahn, Yungsun; Pojman, Joan; Stavert, Bob; Strange, Nathan; Stumpf, Paul; Wagner, Sean; Wolff, Peter;
2006-01-01
Many times during the Cassini-Huygens mission to Saturn, propulsive maneuvers must be spaced so closely together that there isn't enough time or workforce to execute the maneuver-related software manually, one subsystem at a time. Automation is required. Automating the maneuver design process has involved close cooperation between teams. We present the contribution from the Navigation system. In scope, this includes trajectory propagation and search, generation of ephemerides, general tasks such as email notification and file transfer, and presentation materials. The software has been used to help understand maneuver optimization results, Huygens probe delivery statistics, and Saturn ring-plane crossing geometry. The Maneuver Automation Software (MAS), developed for the Cassini-Huygens program, enables frequent maneuvers by handling mundane tasks such as creation of deliverable files, file delivery, generation and transmission of email announcements, and generation of presentation material and other supporting documentation. By hand, these tasks took up hours, if not days, of work for each maneuver. Automated, these tasks may be completed in under an hour. During the cruise trajectory the spacing of maneuvers was such that development of a maneuver design could span about a month, involving several other processes in addition to those described above. Often, about the last five days of this process covered the generation of a final design using an updated orbit-determination estimate. To support the tour trajectory, the orbit determination data cut-off of five days before the maneuver needed to be reduced to approximately one day, and the whole maneuver development process needed to be reduced to less than a week.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-10
... interface with AUTOM via an Exchange approved proprietary electronic quoting device in eligible options to... to generate and submit option quotations electronically through AUTOM in eligible options to which...
Xenon International Automated Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-08-05
The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.
Legaz-García, María Del Carmen; Dentler, Kathrin; Fernández-Breis, Jesualdo Tomás; Cornet, Ronald
2017-01-01
ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method to formalize quality indicators. The method has been implemented in the CLIF tool which supports its users in generating computable queries based on a patient data model which can be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the process of generating SPARQL queries from quality indicators that have been formalized with CLIF and integrated them into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.
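The generation of a computable query from a formalized indicator can be sketched as simple template expansion. The indicator structure and the `ex:` predicate IRIs below are invented for illustration; CLIF's actual formalization and the archetype-based patient data model are considerably richer:

```python
def indicator_to_sparql(indicator):
    """Expand a formalized indicator (a list of numeric conditions on patient
    properties) into a counting SPARQL query string."""
    patterns = []
    for i, (predicate, op, value) in enumerate(indicator["conditions"]):
        patterns.append(f"?patient {predicate} ?v{i} . FILTER(?v{i} {op} {value})")
    body = " ".join(patterns)
    return "SELECT (COUNT(DISTINCT ?patient) AS ?n) WHERE { " + body + " }"

# Toy indicator: adult patients with HbA1c above 7.0 (hypothetical predicates).
indicator = {"conditions": [("ex:hasHbA1c", ">", 7.0),
                            ("ex:ageYears", ">=", 18)]}
query = indicator_to_sparql(indicator)
```

The resulting query string is what would then be executed against the OWL/archetype-backed patient data in ArchMS.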
Bhattacharyya, S; Fan, L; Vo, L; Labadie, J
2000-04-01
Amine libraries and their derivatives are important targets for high throughput synthesis because of their versatility as medicinal agents and agrochemicals. As a part of our efforts towards automated chemical library synthesis, a titanium(IV) isopropoxide mediated solution phase reductive amination protocol was successfully translated to automation on the Trident(TM) library synthesizer of Argonaut Technologies. An array of 24 secondary amines was prepared in high yield and purity from 4 primary amines and 6 carbonyl compounds. These secondary amines were further utilized in a split synthesis to generate libraries of ureas, amides and sulfonamides in solution phase on the Trident(TM). The automated runs included 192 reactions to synthesize 96 ureas in duplicate and 96 reactions to synthesize 48 amides and 48 sulfonamides. A number of polymer-assisted solution phase protocols were employed for parallel work-up and purification of the products in each step.
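The 24-member array follows directly from enumerating every pairing of the 4 primary amines with the 6 carbonyl compounds; a sketch with placeholder names (the actual reagents are not listed in the abstract):

```python
from itertools import product

amines    = [f"amine_{i}" for i in range(1, 5)]      # 4 primary amines
carbonyls = [f"carbonyl_{i}" for i in range(1, 7)]   # 6 carbonyl compounds

# One reductive amination product per amine-carbonyl pairing.
secondary_amines = [f"{a}+{c}" for a, c in product(amines, carbonyls)]
```

The same enumeration logic, applied again to the secondary amines and three electrophile classes, accounts for the 192 urea and 96 amide/sulfonamide reactions in the split synthesis.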
Madhavan, Poornima; Wiegmann, Douglas A
2005-01-01
Automation users often disagree with diagnostic aids that are imperfectly reliable. The extent to which users' agreements with an aid are anchored to their personal, self-generated diagnoses was explored. Participants (N = 75) performed 200 trials in which they diagnosed pump failures using an imperfectly reliable automated aid. One group (nonforced anchor, n = 50) provided diagnoses only after consulting the aid. Another group (forced anchor, n = 25) provided diagnoses both before and after receiving feedback from the aid. Within the nonforced anchor group, participants' self-reported tendency to prediagnose system failures significantly predicted their tendency to disagree with the aid, revealing a cognitive anchoring effect. Agreement rates of participants in the forced anchor group indicated that public commitment to a diagnosis did not strengthen this effect. Potential applications include the development of methods for reducing cognitive anchoring effects and improving automation utilization in high-risk domains.
Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad
2016-05-26
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
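The "overall finite difference" validation mentioned above amounts to checking an analytic design sensitivity against a numerical derivative of the structural response. A minimal sketch under a toy one-variable assumption (an axial bar whose area scales with the design variable), not anything from MSC/NASTRAN itself:

```python
def response(x):
    """Toy structural response: axial displacement u = F*L/(E*A0*x),
    where the design variable x scales the cross-sectional area."""
    F, L, E, A0 = 1000.0, 2.0, 210e9, 1e-4
    return F * L / (E * A0 * x)

def analytic_sensitivity(x):
    """Hand-derived design sensitivity du/dx."""
    F, L, E, A0 = 1000.0, 2.0, 210e9, 1e-4
    return -F * L / (E * A0 * x**2)

def central_difference(f, x, h=1e-6):
    """Overall finite-difference check of the analytic sensitivity."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

In a real system the same comparison is run per design variable to validate the implemented sensitivity code.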
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on an automated scheme agreed excellently with "gold-standard" manual volumetry (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
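The final volumetry step reduces to counting voxels inside the refined binary mask and scaling by the voxel volume. A minimal sketch with synthetic masks; the voxel spacing and mask shapes are illustrative assumptions, not values from the study:

```python
import numpy as np

def volume_cc(mask, spacing_mm=(0.7, 0.7, 2.5)):
    """Volume in cc of a binary segmentation mask.
    spacing_mm is the (x, y, z) voxel size in millimeters (illustrative)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3

# Synthetic masks standing in for the automated and manual segmentations.
auto = np.zeros((100, 100, 40), dtype=bool)
auto[20:80, 20:80, 5:35] = True
manual = np.zeros_like(auto)
manual[21:80, 20:80, 5:35] = True

diff_pct = abs(volume_cc(auto) - volume_cc(manual)) / volume_cc(manual) * 100
```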
NASA Technical Reports Server (NTRS)
Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily
2009-01-01
Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full-suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.
Space station automation of common module power management and distribution
NASA Technical Reports Server (NTRS)
Miller, W.; Jones, E.; Ashworth, B.; Riedesel, J.; Myers, C.; Freeman, K.; Steele, D.; Palmer, R.; Walsh, R.; Gohring, J.
1989-01-01
The purpose is to automate a breadboard-level Power Management and Distribution (PMAD) system which possesses many functional characteristics of a specified Space Station power system. The automation system was built upon a 20 kHz ac source with redundancy of the power buses. There are two power distribution control units which furnish power to six load centers, which in turn enable load circuits based upon a system-generated schedule. The progress in building this specified autonomous system is described. Automation of the Space Station Module PMAD was accomplished by segmenting the complete task into the following four independent tasks: (1) develop a detailed approach for PMAD automation; (2) define the software and hardware elements of automation; (3) develop the automation system for the PMAD breadboard; and (4) select an appropriate host processing environment.
Generic and Automated Data Evaluation in Analytical Measurement.
Adam, Martin; Fleischer, Heidi; Thurow, Kerstin
2017-04-01
In recent years, automation has become more and more important in the field of elemental and structural chemical analysis to reduce the high degree of manual operation and processing time as well as human error. The resulting high number of data points requires fast and automated data evaluation. To handle the preprocessed export data from different analytical devices and software from various vendors, a standardized solution that requires no programming knowledge is preferable. In modern laboratories, multiple users will use this software on multiple personal computers with different operating systems (e.g., Windows, Macintosh, Linux). Mobile devices such as smartphones and tablets have also gained growing importance. The developed software, Project Analytical Data Evaluation (ADE), is implemented as a web application. To transmit the preevaluated data from the device software to the Project ADE, the exported XML report files are detected and the included data are imported into the entities database using the Data Upload software. Different calculation types of a sample within one measurement series (e.g., method validation) are identified using information tags inside the sample name. The results are presented in tables and diagrams on different information levels (general, detailed for one analyte or sample).
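The import step described above (detect exported XML reports, pull out analyte values, and classify the calculation type from a tag inside the sample name) can be sketched with the standard library. The report schema and the "_cal" naming tag below are hypothetical; real vendor exports will differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical export format; actual vendor report schemas will differ.
report_xml = """<report>
  <series name="method_validation_Pb">
    <sample name="std_1_cal"><analyte id="Pb" value="10.02" unit="ppb"/></sample>
    <sample name="std_2_cal"><analyte id="Pb" value="20.11" unit="ppb"/></sample>
  </series>
</report>"""

def import_report(xml_text):
    """Flatten an exported XML report into rows for a database upload."""
    rows = []
    root = ET.fromstring(xml_text)
    for series in root.iter("series"):
        for sample in series.iter("sample"):
            # The calculation type is identified by an information tag inside
            # the sample name; here the suffix "_cal" marks calibration samples.
            name = sample.get("name", "")
            calc_type = "calibration" if name.endswith("_cal") else "measurement"
            for analyte in sample.iter("analyte"):
                rows.append({
                    "series": series.get("name"),
                    "sample": name,
                    "calc_type": calc_type,
                    "analyte": analyte.get("id"),
                    "value": float(analyte.get("value")),
                    "unit": analyte.get("unit"),
                })
    return rows
```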
A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation
NASA Astrophysics Data System (ADS)
Kıvılcım, C. Ö.; Duran, Z.
2016-06-01
The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. However, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation has become an active area of research; however, fully automated cultural heritage documentation remains an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets, based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open source software environment, using the example project of a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.
Skyalert: a Platform for Event Understanding and Dissemination
NASA Astrophysics Data System (ADS)
Williams, Roy; Drake, A. J.; Djorgovski, S. G.; Donalek, C.; Graham, M. J.; Mahabal, A.
2010-01-01
Skyalert.org is an event repository, web interface, and event-oriented workflow architecture that can be used in many different ways for handling astronomical events that are encoded as VOEvent. It can be used as a remote application (events in the cloud) or installed locally. Some applications are: dissemination of events with sophisticated discrimination (trigger), using email, instant message, RSS, twitter, etc.; an authoring interface for survey-generated events, follow-up observations, and other event types; event streams that can be put into the skyalert.org repository, either public or private, or into a local installation of Skyalert; event-driven software components to fetch archival data, for data-mining and classification of events; a human interface to events through wiki, comments, and circulars; use of the "notices and circulars" model, where machines make the notices in real time and people write the interpretation later; building trusted, automated decisions for automated follow-up observation, and the information infrastructure for automated follow-up with DC3 and HTN telescope schedulers; citizen science projects such as artifact detection and classification; query capability for past events, including correlations between different streams and correlations with existing source catalogs; and event metadata structures and connection to the global registry of the virtual observatory.
NASA Astrophysics Data System (ADS)
Chęciński, Jakub; Frankowski, Marek
2016-10-01
We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows fast, error-proof, and easy creation of Mifs, without any of the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions complementing it with magnetoresistance and spin-transfer-torque calculations, as well as local magnetization data selection for output. Our software allows the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronous excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.
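Programmatic Mif generation of the kind the tool automates can be sketched as simple templating: render simulation parameters into the Mif 2.1 text format and emit one file per point of a parameter sweep. The fragment below is a skeleton only (a real Mif also needs energy terms, an evolver, and a driver), and the specific blocks shown are illustrative.

```python
def make_mif(cellsize_nm, exchange_A):
    """Render a minimal OOMMF Mif 2.1 fragment from Python parameters.
    A sketch: a complete Mif needs more Specify blocks than shown here."""
    return f"""# MIF 2.1
Specify Oxs_BoxAtlas:atlas {{
  xrange {{0 100e-9}}  yrange {{0 100e-9}}  zrange {{0 5e-9}}
}}
Specify Oxs_RectangularMesh:mesh {{
  cellsize {{ {cellsize_nm}e-9 {cellsize_nm}e-9 {cellsize_nm}e-9 }}
  atlas :atlas
}}
Specify Oxs_UniformExchange {{ A {exchange_A} }}
"""

# A cell-size sweep generates one Mif per parameter value.
mifs = [make_mif(c, 1.3e-11) for c in (2.5, 5.0)]
```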
Optimizing Decision Preparedness by Adapting Scenario Complexity and Automating Scenario Generation
NASA Technical Reports Server (NTRS)
Dunne, Rob; Schatz, Sae; Flore, Stephen M.; Nicholson, Denise
2011-01-01
Klein's recognition-primed decision (RPD) framework proposes that experts make decisions by recognizing similarities between current decision situations and previous decision experiences. Unfortunately, military personnel are often presented with situations that they have not experienced before. Scenario-based training (SBT) can help mitigate this gap. However, SBT remains a challenging and inefficient training approach. To address these limitations, the authors present an innovative formulation of scenario complexity that contributes to the larger research goal of developing an automated scenario generation system. This system will enable trainees to effectively advance through a variety of increasingly complex decision situations and experiences. By adapting scenario complexities and automating generation, trainees will be provided with a greater variety of appropriately calibrated training events, thus broadening their repositories of experience. Preliminary results from empirical testing (N=24) of the proof-of-concept formula are presented, and future avenues of scenario complexity research are also discussed.
NASA Technical Reports Server (NTRS)
Mueller, R. P.; Townsend, I. I.; Tamasy, G. J.; Evers, C. J.; Sibille, L. J.; Edmunson, J. E.; Fiske, M. R.; Fikes, J. C.; Case, M.
2018-01-01
The purpose of the Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) project is to incorporate the Liquid Goods Delivery System (LGDS) into the Dry Goods Delivery System (DGDS) structure to create an integrated and automated Materials Delivery System (MDS) for 3D printing structures with ordinary Portland cement (OPC) concrete. ACES 3 is a prototype for 3-D printing barracks for soldiers in forward bases, here on Earth. The LGDS supports ACES 3 by storing liquid materials, mixing recipe batches of liquid materials, and working with the Dry Goods Feed System (DGFS) previously developed for ACES 2, combining the materials that are eventually extruded out of the print nozzle. Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) is a project led by the US Army Corps of Engineers (USACE) and supported by NASA. The equivalent 3D printing system for construction in space is designated Additive Construction with Mobile Emplacement (ACME) by NASA.
1987-06-01
This report provides a description of commercially available sensors, instruments, and ADP equipment that may be selected to fully automate plumbline monitoring of structures. The automated plumbline monitoring system includes up to twelve sensors, repeaters, a system controller, and a printer.
Experimental Evaluation of an Integrated Datalink and Automation-Based Strategic Trajectory Concept
NASA Technical Reports Server (NTRS)
Mueller, Eric
2007-01-01
This paper presents research on the interoperability of trajectory-based automation concepts and technologies with the modern Flight Management Systems and datalink communication available on many of today's commercial aircraft. A tight integration of trajectory-based ground automation systems with the aircraft Flight Management System through datalink will enable mid-term and far-term benefits from trajectory-based automation methods. A two-way datalink connection between the trajectory-based automation resident in the Center/TRACON Automation System and the Future Air Navigation System-1 integrated FMS/datalink in the NASA Ames B747-400 Level D simulator has been established, and extensive simulation of the use of datalink messages to generate strategic trajectories has been completed. A strategic trajectory is defined as an aircraft deviation needed to solve a conflict or honor a route request and then merge the aircraft back to its nominal preferred trajectory using a single continuous trajectory clearance. Engineers on the ground side of the datalink generated lateral and vertical trajectory clearances and transmitted them to the Flight Management System of the 747; the airborne automation then flew the new trajectory without human intervention, requiring the flight crew only to review and accept the trajectory. This simulation established the protocols needed for a significant majority of the trajectory change types required to solve a traffic conflict or deviate around weather. This demonstration provides a basis for understanding the requirements for integrating trajectory-based automation with current Flight Management Systems and datalink to support future National Airspace System operations.
Universal Verification Methodology Based Register Test Automation Flow.
Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu
2016-05-01
In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or integrate models in a test-bench environment because it requires knowledge of SystemVerilog and UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe register specifications in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. In addition, we automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use register models without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing test sequences for each type of register test-case. With the proposed flow, designers can save a considerable amount of time to verify the functionality of registers.
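The spreadsheet-to-IP-XACT translation step can be sketched as a small converter: read one register per row and emit the corresponding IP-XACT register elements. The CSV column layout is an illustrative assumption (not the paper's template), and only a tiny subset of the IEEE 1685 schema is emitted.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Spreadsheet-style register template, one row per register (illustrative).
template = """name,offset,width,access
CTRL,0x00,32,read-write
STATUS,0x04,32,read-only
"""

def to_ipxact(csv_text):
    """Translate a register spreadsheet into a minimal IP-XACT fragment."""
    ns = "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009"
    ET.register_namespace("spirit", ns)
    block = ET.Element(f"{{{ns}}}addressBlock")
    for row in csv.DictReader(io.StringIO(csv_text)):
        reg = ET.SubElement(block, f"{{{ns}}}register")
        ET.SubElement(reg, f"{{{ns}}}name").text = row["name"]
        ET.SubElement(reg, f"{{{ns}}}addressOffset").text = row["offset"]
        ET.SubElement(reg, f"{{{ns}}}size").text = row["width"]
        ET.SubElement(reg, f"{{{ns}}}access").text = row["access"]
    return ET.tostring(block, encoding="unicode")
```

From such an IP-XACT description, commercial tools can then generate the UVM register model as described above.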
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Jones, Thomas C.; Doggett, William R.; Roithmayr, Carlos M.; King, Bruce D.; Mikulas, Martin M.
2009-01-01
The objective of this paper is to describe and summarize the results of the development efforts for the Lunar Surface Manipulation System (LSMS) with respect to increasing the performance, operational versatility, and automation. Three primary areas of development are covered, including: the expansion of the operational envelope and versatility of the current LSMS test-bed, the design of a second generation LSMS, and the development of automation and remote control capability. The first generation LSMS, which has been designed, built, and tested both in lab and field settings, is shown to have increased range of motion and operational versatility. Features such as fork lift mode, side grappling of payloads, digging and positioning of lunar regolith, and a variety of special end effectors are described. LSMS operational viability depends on being able to reposition its base from an initial position on the lander to a mobility chassis or fixed locations around the lunar outpost. Preliminary concepts are presented for the second generation LSMS design, which will perform this self-offload capability. Incorporating design improvements, the second generation will have longer reach and three times the payload capability, yet it will have approximately equivalent mass to the first generation. Lastly, this paper covers improvements being made to the control system of the LSMS test-bed, which is currently operated using joint velocity control with visual cues. These improvements include joint angle sensors, inverse kinematics, and automated controls.
NASA Technical Reports Server (NTRS)
Mixon, Randolph W.; Hankins, Walter W., III; Wise, Marion A.
1988-01-01
Research at Langley AFB concerning automated space assembly is reviewed, including a Space Shuttle experiment to test astronaut ability to assemble a repetitive truss structure, testing the use of teleoperated manipulators to construct the Assembly Concept for Construction of Erectable Space Structures I truss, and assessment of the basic characteristics of manipulator assembly operations. Other research topics include the simultaneous coordinated control of dual-arm manipulators and the automated assembly of candidate Space Station trusses. Consideration is given to the construction of an Automated Space Assembly Laboratory to study and develop the algorithms, procedures, special purpose hardware, and processes needed for automated truss assembly.
Reading PDB: perception of molecules from 3D atomic coordinates.
Urbaczek, Sascha; Kolodzik, Adrian; Groth, Inken; Heuser, Stefan; Rarey, Matthias
2013-01-28
The analysis of small molecule crystal structures is a common way to gather valuable information for drug development. The necessary structural data is usually provided in specific file formats containing only element identities and three-dimensional atomic coordinates as reliable chemical information. Consequently, the automated perception of molecular structures from atomic coordinates has become a standard task in cheminformatics. The molecules generated by such methods must be both chemically valid and reasonable to provide a reliable basis for subsequent calculations. This can be a difficult task since the provided coordinates may deviate from ideal molecular geometries due to experimental uncertainties or low resolution. Additionally, the quality of the input data often differs significantly thus making it difficult to distinguish between actual structural features and mere geometric distortions. We present a method for the generation of molecular structures from atomic coordinates based on the recently published NAOMI model. By making use of this consistent chemical description, our method is able to generate reliable results even with input data of low quality. Molecules from 363 Protein Data Bank (PDB) entries could be perceived with a success rate of 98%, a result which could not be achieved with previously described methods. The robustness of our approach has been assessed by processing all small molecules from the PDB and comparing them to reference structures. The complete data set can be processed in less than 3 min, thus showing that our approach is suitable for large scale applications.
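At its simplest, perceiving connectivity from nothing but element identities and 3D coordinates is a distance test against covalent radii. The sketch below uses that basic heuristic with an illustrative tolerance; it is not the NAOMI model itself, which applies a far more consistent chemical description on top of such geometry.

```python
import math

# Single-bond covalent radii in Angstroms (illustrative subset of published values).
COVALENT_RADII = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66, "S": 1.05}
TOLERANCE = 0.40  # Angstrom slack for experimental coordinate error (assumed)

def perceive_bonds(atoms):
    """atoms: list of (element, x, y, z) tuples.
    Returns index pairs judged bonded by a distance-based heuristic."""
    bonds = []
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            (e1, *p1), (e2, *p2) = atoms[i], atoms[j]
            d = math.dist(p1, p2)
            if d < COVALENT_RADII[e1] + COVALENT_RADII[e2] + TOLERANCE:
                bonds.append((i, j))
    return bonds

# Water: both O-H pairs are bonded, the H-H pair is not.
water = [("O", 0.0, 0.0, 0.0), ("H", 0.96, 0.0, 0.0), ("H", -0.24, 0.93, 0.0)]
```

Distorted or low-resolution coordinates are exactly where such a bare heuristic fails, which motivates the consistency checks discussed in the abstract.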
Library Research: A Ten Year Analysis of the Library Automation Marketplace: 1981-1990.
ERIC Educational Resources Information Center
Fivecoat, Martha H.
This study focuses on the growth of the library automation market from 1981 to 1990. It draws on library automation data published annually in the Library Journal between 1981 and 1990. The data are used to examine: (1) the overall library system market trends based on the total and cumulative number of systems installed and revenue generated; (2)…
DockoMatic: automated peptide analog creation for high throughput virtual screening.
Jacob, Reed B; Bullock, Casey W; Andersen, Tim; McDougal, Owen M
2011-10-01
The purpose of this manuscript is threefold: (1) to describe an update to DockoMatic that allows the user to generate cyclic peptide analog structure files based on protein database (pdb) files, (2) to test the accuracy of the peptide analog structure generation utility, and (3) to evaluate the high throughput capacity of DockoMatic. The DockoMatic graphical user interface interfaces with the software program Treepack to create user defined peptide analogs. To validate this approach, DockoMatic produced cyclic peptide analogs were tested for three-dimensional structure consistency and binding affinity against four experimentally determined peptide structure files available in the Research Collaboratory for Structural Bioinformatics database. The peptides used to evaluate this new functionality were alpha-conotoxins ImI, PnIA, and their published analogs. Peptide analogs were generated by DockoMatic and tested for their ability to bind to X-ray crystal structure models of the acetylcholine binding protein originating from Aplysia californica. The results, consisting of more than 300 simulations, demonstrate that DockoMatic predicts the binding energy of peptide structures to within 3.5 kcal mol(-1), and the orientation of bound ligand compares to within 1.8 Å root mean square deviation for ligand structures as compared to experimental data. Evaluation of high throughput virtual screening capacity demonstrated that DockoMatic can collect, evaluate, and summarize the output of 10,000 AutoDock jobs in less than 2 hours of computational time, while 100,000 jobs require approximately 15 hours and 1,000,000 jobs are estimated to take up to a week. Copyright © 2011 Wiley Periodicals, Inc.
CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.
Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali
2016-01-13
Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . 
We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, time-saving, yet accurate and powerful tool to analyze large RNA-seq datasets, and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and its combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.
NASA Astrophysics Data System (ADS)
Nikolaev, V. N.; Titov, D. V.; Syryamkin, V. I.
2018-05-01
A comparative assessment is made of the channel capacity of different variants of the structural organization of automated information processing systems. A model for assessing information processing time, depending on the type of standard elements and their structural organization, is developed.
The terminal area automated path generation problem
NASA Technical Reports Server (NTRS)
Hsin, C.-C.
1977-01-01
The automated terminal area path generation problem in the advanced Air Traffic Control (ATC) system has been studied. Definitions, inputs, outputs, and the interrelationships with other ATC functions are discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum-effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc. Recommendations are made on real-world implementations.
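The core of path stretching on a given schedule is a root-finding problem: adjust a stretch parameter until the flown path's arrival time matches the scheduled delivery time. A minimal Newton-Raphson sketch under a toy dog-leg geometry assumption (the distances, speed, and path model below are illustrative, not the paper's formulation):

```python
import math

def newton(f, df, x0, tol=1e-9, max_iter=50):
    """Newton-Raphson root finder."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy model: a straight segment of length L nm is replaced by two symmetric
# dog-leg legs of half-width w, flown at constant speed v knots, so the
# lengthened path absorbs a scheduled delay.
L, v, T_target = 50.0, 250.0, 0.225  # nm, knots, target arrival time (hours)

def arrival_time(w):
    return 2.0 * math.hypot(L / 2.0, w) / v

def d_arrival(w):
    return 2.0 * w / (v * math.hypot(L / 2.0, w))

# Solve arrival_time(w) == T_target for the stretch parameter w.
w_star = newton(lambda w: arrival_time(w) - T_target, d_arrival, x0=10.0)
```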
An Abstraction-Based Data Model for Information Retrieval
NASA Astrophysics Data System (ADS)
McAllister, Richard A.; Angryk, Rafal A.
Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure to be used in the retrieval of documents from an indexed corpus. We present this method as an entree to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.
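The idea of indexing documents under every concept on a term's path of abstraction can be sketched with a toy hypernym table standing in for WordNet (so no lexical-database dependency is assumed). A document mentioning only "dog" then becomes retrievable under "canine", "carnivore", and "mammal" as well.

```python
# Toy hypernym ontology standing in for WordNet (illustrative, not real data).
HYPERNYM = {"dog": "canine", "canine": "carnivore", "carnivore": "mammal",
            "cat": "feline", "feline": "carnivore", "mammal": None}

def abstraction_path(word):
    """Walk the hypernym chain to the root, yielding the path of abstraction."""
    path = []
    while word is not None:
        path.append(word)
        word = HYPERNYM.get(word)
    return path

def build_inverted_index(docs):
    """Index each document under every concept on its terms' abstraction paths."""
    index = {}
    for doc_id, terms in docs.items():
        for term in terms:
            for concept in abstraction_path(term):
                index.setdefault(concept, set()).add(doc_id)
    return index

index = build_inverted_index({"d1": ["dog"], "d2": ["cat"], "d3": ["dog", "cat"]})
```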
End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas
NASA Astrophysics Data System (ADS)
Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.
2017-11-01
Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While the automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats in MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was less for auto-segmentation without versus auto-segmentation with post-processing, indicating convergence on a manually corrected model. An existent but marginal relative difference of electric field maps from models with manual correction versus those without was identified, and a clear advantage of using the NASTRAN mesh file format was found. 
The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
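The Dice coefficient used above to compare segmentations with and without post-processing is a standard overlap measure between binary masks. A minimal sketch with synthetic masks (the mask shapes are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentation masks:
    2*|A & B| / (|A| + |B|), equal to 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic stand-ins for an auto-segmentation and its corrected version.
auto = np.zeros((64, 64, 64), dtype=bool)
auto[10:50, 10:50, 10:50] = True
corrected = np.zeros_like(auto)
corrected[12:50, 10:50, 10:50] = True
```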
van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna
2012-03-01
Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was done. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. 
The active surface segmentation results were shown to closely approximate manual segmentations.
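The voxel-wise temporal-maximum initialization described above can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' implementation; the array layout, the use of the velocity magnitude, and the percentile threshold are assumptions):

```python
import numpy as np

def temporal_maximum_speed(velocity):
    """velocity: (T, Z, Y, X, 3) time-resolved 3-component PC-MRI field.
    Returns the voxel-wise temporal maximum of the speed."""
    speed = np.linalg.norm(velocity, axis=-1)  # (T, Z, Y, X)
    return speed.max(axis=0)                   # collapse the time axis

# Thresholding the temporal-maximum map yields a rough lumen mask that can
# seed the initial surface (e.g. via marching cubes).
rng = np.random.default_rng(0)
vel = rng.normal(size=(8, 4, 4, 4, 3))         # synthetic stand-in data
tmax = temporal_maximum_speed(vel)
mask = tmax > np.percentile(tmax, 75)
```

Taking the maximum over time highlights voxels that carry high flow at any cardiac phase, which is why a single static map can outline the full lumen.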
NASA Astrophysics Data System (ADS)
Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.
2017-01-01
We present a Python program, FREQ, for calculating the optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies obtained from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed harmonic vibrational frequencies. To obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a given method and basis set in Gaussian 09 or Gaussian 03. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
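The least-squares core of such a scale-factor fit reduces to a closed-form ratio; the sketch below shows that generic form (an illustration only — the actual objective and weighting in the Alecu et al. model may differ):

```python
def optimal_scale_factor(computed, reference):
    """Closed-form lambda minimizing sum_i (lambda*computed_i - reference_i)**2,
    i.e. the ordinary least-squares fit of reference values to computed ones."""
    num = sum(c * r for c, r in zip(computed, reference))
    den = sum(c * c for c in computed)
    return num / den

# Toy data: computed harmonic ZPEs vs. accurate reference ZPEs (made up).
lam = optimal_scale_factor([1.0, 2.0], [0.98, 1.96])  # lambda close to 0.98
```

With real data, `computed` would hold harmonic ZPEs from the chosen method/basis set and `reference` the benchmark ZPEs of the 15- or 6-molecule set.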
Structural Analysis of Biodiversity
Sirovich, Lawrence; Stoeckle, Mark Y.; Zhang, Yu
2010-01-01
Large, recently-available genomic databases cover a wide range of life forms, suggesting opportunity for insights into genetic structure of biodiversity. In this study we refine our recently-described technique using indicator vectors to analyze and visualize nucleotide sequences. The indicator vector approach generates correlation matrices, dubbed Klee diagrams, which represent a novel way of assembling and viewing large genomic datasets. To explore its potential utility, here we apply the improved algorithm to a collection of almost 17000 DNA barcode sequences covering 12 widely-separated animal taxa, demonstrating that indicator vectors for classification gave correct assignment in all 11000 test cases. Indicator vector analysis revealed discontinuities corresponding to species- and higher-level taxonomic divisions, suggesting an efficient approach to classification of organisms from poorly-studied groups. As compared to standard distance metrics, indicator vectors preserve diagnostic character probabilities, enable automated classification of test sequences, and generate high-information density single-page displays. These results support application of indicator vectors for comparative analysis of large nucleotide data sets and raise prospect of gaining insight into broad-scale patterns in the genetic structure of biodiversity. PMID:20195371
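The indicator-vector idea can be illustrated with a minimal one-hot encoding and a correlation matrix (a simplified sketch under assumed conventions; the authors' actual vector construction differs in detail):

```python
import numpy as np

BASES = "ACGT"

def indicator_vector(seq):
    """One-hot encode an aligned nucleotide sequence and flatten it."""
    v = np.zeros((len(seq), len(BASES)))
    for i, b in enumerate(seq):
        if b in BASES:                 # ambiguity codes stay all-zero
            v[i, BASES.index(b)] = 1.0
    return v.ravel()

# Pairwise correlations of indicator vectors give a Klee-diagram-style matrix:
# closely related sequences correlate strongly, distant taxa weakly.
seqs = ["ACGTACGT", "ACGTACGA", "TGCATGCA"]
X = np.stack([indicator_vector(s) for s in seqs])
K = np.corrcoef(X)
```

Plotting `K` as a heatmap over thousands of barcode sequences produces the block structure along taxonomic divisions that the abstract describes.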
Rapid SAW Sensor Development Tools
NASA Technical Reports Server (NTRS)
Wilson, William C.; Atkinson, Gary M.
2007-01-01
The lack of integrated design tools for Surface Acoustic Wave (SAW) devices has led us to develop tools for the design, modeling, analysis, and automatic layout generation of SAW devices. These tools enable rapid development of wireless SAW sensors. The tools developed have been designed to integrate into existing Electronic Design Automation (EDA) tools to take advantage of existing 3D modeling, and Finite Element Analysis (FEA). This paper presents the SAW design, modeling, analysis, and automated layout generation tools.
The Phenix Software for Automated Determination of Macromolecular Structures
Adams, Paul D.; Afonine, Pavel V.; Bunkóczi, Gábor; Chen, Vincent B.; Echols, Nathaniel; Headd, Jeffrey J.; Hung, Li-Wei; Jain, Swati; Kapral, Gary J.; Grosse Kunstleve, Ralf W.; McCoy, Airlie J.; Moriarty, Nigel W.; Oeffner, Robert D.; Read, Randy J.; Richardson, David C.; Richardson, Jane S.; Terwilliger, Thomas C.; Zwart, Peter H.
2011-01-01
X-ray crystallography is a critical tool in the study of biological systems. It is able to provide information that has been a prerequisite to understanding the fundamentals of life. It is also a method that is central to the development of new therapeutics for human disease. Significant time and effort are required to determine and optimize many macromolecular structures because of the need for manual interpretation of complex numerical data, often using many different software packages, and the repeated use of interactive three-dimensional graphics. The Phenix software package has been developed to provide a comprehensive system for macromolecular crystallographic structure solution with an emphasis on automation. This has required the development of new algorithms that minimize or eliminate subjective input in favour of built-in expert-systems knowledge, the automation of procedures that are traditionally performed by hand, and the development of a computational framework that allows a tight integration between the algorithms. The application of automated methods is particularly appropriate in the field of structural proteomics, where high throughput is desired. Features in Phenix for the automation of experimental phasing with subsequent model building, molecular replacement, structure refinement and validation are described and examples given of running Phenix from both the command line and graphical user interface. PMID:21821126
Automated Blazar Light Curves Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Spencer James
2017-07-27
This presentation describes a problem and methodology pertaining to automated blazar light curves. Studying the optical variability patterns of blazars requires constructing light curves, and before the light curves can be generated the data must be filtered to ensure quality.
Development of a Graphics Based Automated Emergency Response System (AERS) for Rail Transit Systems
DOT National Transportation Integrated Search
1989-05-01
This report presents an overview of the second generation Automated Emergency Response System (AERS2). Developed to assist transit systems in responding effectively to emergency situations, AERS2 is a microcomputer-based information retrieval system ...
The First MS-Cleavable, Photo-Thiol-Reactive Cross-Linker for Protein Structural Studies
NASA Astrophysics Data System (ADS)
Iacobucci, Claudio; Piotrowski, Christine; Rehkamp, Anne; Ihling, Christian H.; Sinz, Andrea
2018-04-01
Cleavable cross-linkers are gaining increasing importance for chemical cross-linking/mass spectrometry (MS) as they permit a reliable and automated data analysis in structural studies of proteins and protein assemblies. Here, we introduce 1,3-diallylurea (DAU) as the first CID-MS/MS-cleavable, photo-thiol-reactive cross-linker. DAU is a commercially available, inexpensive reagent that efficiently undergoes an anti-Markovnikov hydrothiolation with cysteine residues in the presence of a radical initiator upon UV-A irradiation. Radical cysteine cross-linking proceeds via an orthogonal "click reaction" and yields stable alkyl sulfide products. DAU reacts at physiological pH, and cross-linking reactions with peptides and proteins can be performed at temperatures as low as 4 °C. The central urea bond is efficiently cleaved upon collisional activation during tandem MS experiments, generating characteristic product ions. This improves the reliability of automated cross-link identification. Different radical initiators have been screened for the cross-linking reaction of DAU using the thiol-containing compounds cysteine and glutathione. Our concept has also been exemplified for the biologically relevant proteins bMunc13-2 and retinal guanylyl cyclase-activating protein-2.
Schmidt, Thomas H; Kandt, Christian
2012-10-22
At the beginning of each molecular dynamics membrane simulation stands the generation of a suitable starting structure which includes the working steps of aligning membrane and protein and seamlessly accommodating the protein in the membrane. Here we introduce two efficient and complementary methods based on pre-equilibrated membrane patches, automating these steps. Using a voxel-based cast of the coarse-grained protein, LAMBADA computes a hydrophilicity profile-derived scoring function based on which the optimal rotation and translation operations are determined to align protein and membrane. Employing an entirely geometrical approach, LAMBADA is independent from any precalculated data and aligns even large membrane proteins within minutes on a regular workstation. LAMBADA is the first tool performing the entire alignment process automatically while providing the user with the explicit 3D coordinates of the aligned protein and membrane. The second tool is an extension of the InflateGRO method addressing the shortcomings of its predecessor in a fully automated workflow. Determining the exact number of overlapping lipids based on the area occupied by the protein and restricting expansion, compression and energy minimization steps to a subset of relevant lipids through automatically calculated and system-optimized operation parameters, InflateGRO2 yields optimal lipid packing and reduces lipid vacuum exposure to a minimum preserving as much of the equilibrated membrane structure as possible. Applicable to atomistic and coarse grain structures in MARTINI format, InflateGRO2 offers high accuracy, fast performance, and increased application flexibility permitting the easy preparation of systems exhibiting heterogeneous lipid composition as well as embedding proteins into multiple membranes. 
Both tools can be used separately, in combination with other methods, or in tandem permitting a fully automated workflow while retaining a maximum level of usage control and flexibility. To assess the performance of both methods, we carried out test runs using 22 membrane proteins of different size and transmembrane structure.
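The hydrophilicity-profile scoring behind the alignment step can be caricatured as binning residue hydropathy along the membrane normal (a hypothetical simplification; LAMBADA's voxel-based cast and scoring function are considerably more involved):

```python
import numpy as np

# Tiny Kyte-Doolittle subset; positive = hydrophobic. Real tables cover all
# twenty residues.
KD = {"LEU": 3.8, "ILE": 4.5, "SER": -0.8, "LYS": -3.9}

def hydropathy_profile(residues, z_coords, n_bins=4):
    """Average residue hydropathy in slabs along the membrane normal (z).
    The most hydrophobic slab suggests where the bilayer should sit."""
    z = np.asarray(z_coords, dtype=float)
    edges = np.linspace(z.min(), z.max(), n_bins + 1)
    idx = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, n_bins - 1)
    prof = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for i, res in zip(idx, residues):
        prof[i] += KD.get(res, 0.0)
        counts[i] += 1
    return prof / np.maximum(counts, 1)

prof = hydropathy_profile(["LYS", "LEU", "ILE", "SER"], [0.0, 1.1, 1.9, 3.0])
```

Maximizing such a profile-derived score over rotations and translations is the geometric essence of aligning a protein with a pre-equilibrated membrane patch.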
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. 
The design event stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results.
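The simple first-in-first-out queueing fallback described above can be sketched as follows (a hypothetical Python reconstruction with invented names; the original scripts were written in UNIX shell and Perl):

```python
from collections import deque

class SimpleFifoQueue:
    """Minimal FIFO job queue for hosts without queueing software:
    jobs run in submission order, up to a fixed concurrency limit."""

    def __init__(self, max_running=2):
        self.pending = deque()
        self.running = []
        self.max_running = max_running

    def submit(self, job):
        self.pending.append(job)
        self._dispatch()

    def finish(self, job):
        self.running.remove(job)
        self._dispatch()          # a freed slot admits the next pending job

    def _dispatch(self):
        while self.pending and len(self.running) < self.max_running:
            self.running.append(self.pending.popleft())

q = SimpleFifoQueue(max_running=1)
q.submit("ins2d_case1")
q.submit("pmarc_case2")           # waits behind the first job
first_running = list(q.running)
q.finish("ins2d_case1")           # slot freed; next job dispatched
```

The real script system additionally staged input files to the resource and retrieved the output dataset around each job, but the scheduling core is this simple.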
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.
1987-01-01
The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.
Appendix C: Automated Vitrification of Mammalian Embryos on a Digital Microfluidic Device.
Liu, Jun; Pyne, Derek G; Abdelgawad, Mohamed; Sun, Yu
2017-01-01
This chapter introduces a digital microfluidic device that automates sample preparation for mammalian embryo vitrification. Individual microdroplets manipulated on the microfluidic device were used as microvessels to transport a single mouse embryo through a complete vitrification procedure. Advantages of this approach, compared to manual operation and channel-based microfluidic vitrification, include automated operation, cryoprotectant concentration gradient generation, and feasibility of loading and retrieval of embryos.
Computer-Generated Feedback on Student Writing
ERIC Educational Resources Information Center
Ware, Paige
2011-01-01
A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…
Recent advances in automated protein design and its future challenges.
Setiawan, Dani; Brender, Jeffrey; Zhang, Yang
2018-04-25
Protein function is determined by protein structure which is in turn determined by the corresponding protein sequence. If the rules that cause a protein to adopt a particular structure are understood, it should be possible to refine or even redefine the function of a protein by working backwards from the desired structure to the sequence. Automated protein design attempts to calculate the effects of mutations computationally with the goal of more radical or complex transformations than are accessible by experimental techniques. Areas covered: The authors give a brief overview of the recent methodological advances in computer-aided protein design, showing how methodological choices affect final design and how automated protein design can be used to address problems considered beyond traditional protein engineering, including the creation of novel protein scaffolds for drug development. Also, the authors address specifically the future challenges in the development of automated protein design. Expert opinion: Automated protein design holds potential as a protein engineering technique, particularly in cases where screening by combinatorial mutagenesis is problematic. Considering solubility and immunogenicity issues, automated protein design is initially more likely to make an impact as a research tool for exploring basic biology in drug discovery than in the design of protein biologics.
Isoda, Yuta; Sasaki, Norihiko; Kitamura, Kei; Takahashi, Shuji; Manmode, Sujit; Takeda-Okuda, Naoko; Tamura, Jun-Ichi; Nokami, Toshiki; Itoh, Toshiyuki
2017-01-01
The total synthesis of TMG-chitotriomycin using an automated electrochemical synthesizer for the assembly of carbohydrate building blocks is demonstrated. We have successfully prepared a precursor of TMG-chitotriomycin, which is a structurally-pure tetrasaccharide with typical protecting groups, through the methodology of automated electrochemical solution-phase synthesis developed by us. The synthesis of structurally well-defined TMG-chitotriomycin has been accomplished in 10-steps from a disaccharide building block.
Automated system for analyzing the activity of individual neurons
NASA Technical Reports Server (NTRS)
Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.
1993-01-01
This paper presents a signal processing system that: (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain; and (2) demonstrates the feasibility of generating the command signals of prostheses using the activity of relevant neurons in disabled subjects. The system operates online, in a fully automated manner and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates on-line. The system is implemented with a fast IBM PC compatible processor board that allows on-line operation.
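A common way to set spike-detection thresholds automatically is from a robust noise estimate; the sketch below uses the median-absolute-deviation rule (an assumption for illustration — the paper does not specify its exact threshold algorithm here):

```python
import numpy as np

def detect_spikes(signal, k=5.0):
    """Return onset indices where the signal first crosses an automatically
    chosen threshold. Noise sigma is estimated robustly via the median
    absolute deviation, so large spikes do not inflate the estimate."""
    sigma = np.median(np.abs(signal)) / 0.6745  # MAD -> std for Gaussian noise
    thresh = k * sigma
    above = signal > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges only
    return onsets, thresh

# Synthetic recording: unit-variance noise with one large spike at sample 500.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)
x[500] += 20.0
onsets, thresh = detect_spikes(x)
```

Detected waveform snippets would then be matched against online-generated templates to assign each event to a neuron, with overlapping waveforms resolved separately.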
Automated Planning for a Deep Space Communications Station
NASA Technical Reports Server (NTRS)
Estlin, Tara; Fisher, Forest; Mutz, Darren; Chien, Steve
1999-01-01
This paper describes the application of Artificial Intelligence planning techniques to the problem of antenna track plan generation for a NASA Deep Space Communications Station. The described system enables an antenna communications station to automatically respond to a set of tracking goals by correctly configuring the appropriate hardware and software to provide the requested communication services. To perform this task, the Automated Scheduling and Planning Environment (ASPEN) has been applied to automatically produce antenna tracking plans that are tailored to support a set of input goals. In this paper, we describe the antenna automation problem, the ASPEN planning and scheduling system, how ASPEN is used to generate antenna track plans, the results of several technology demonstrations, and future work utilizing dynamic planning technology.
Management Information Systems and Organizational Structure.
ERIC Educational Resources Information Center
Cox, Bruce B.
1987-01-01
Discusses the context within which office automation takes place by using the models of the Science of Creative Intelligence and Transcendental Meditation. Organizational structures are compared to the phenomenon of the "collective consciousness" and the development of automated information systems from manual methods of organizational…
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
NASA Astrophysics Data System (ADS)
Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.
2014-12-01
Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if they were not designed for it. An example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise, and it may take a developer months to learn LIS and the model's software structure. Debugging and testing of the implementation are also time-consuming because neither LIS nor the model is fully understood at the outset. This time is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. A general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model, and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development requires only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models, and code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It accepts model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is eliminated.
In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These different models vary in complexity with software structure. Also, we will describe how these complexities were overcome through using this approach and results of model benchmarks within LIS.
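The general-model-interface idea — one fixed call signature behind which any model is wrapped, so the framework-side glue can be generated mechanically — can be sketched in Python (the real interface is a generated FORTRAN 90 subroutine; the names and the toy reservoir model here are invented):

```python
def wrap_model(step_fn):
    """Adapt a model-specific step function to one fixed interface:
    (forcings, params, state) -> (new_state, outputs)."""
    def interface(forcings, params, state):
        return step_fn(forcings, params, state)
    return interface

def reservoir_step(forcings, params, state):
    """Toy linear reservoir: storage gains scaled rain, loses a fraction k."""
    s = state["storage"] + params["p"] * forcings["rain"] \
        - params["k"] * state["storage"]
    return {"storage": s}, {"runoff": params["k"] * state["storage"]}

# The framework only ever sees the uniform interface, never the model guts.
lis_entry = wrap_model(reservoir_step)
new_state, outputs = lis_entry({"rain": 2.0},
                               {"p": 0.5, "k": 0.1},
                               {"storage": 1.0})
```

Because every wrapper has the same shape, the surrounding data-transfer code is identical across models, which is exactly what makes template-driven generation possible.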
Non-Uniform Sampling and J-UNIO Automation for Efficient Protein NMR Structure Determination.
Didenko, Tatiana; Proudfoot, Andrew; Dutta, Samit Kumar; Serrano, Pedro; Wüthrich, Kurt
2015-08-24
High-resolution structure determination of small proteins in solution is one of the big assets of NMR spectroscopy in structural biology. Improvements in the efficiency of NMR structure determination through advances in NMR experiments and automation of data handling therefore attract continued interest. Here, non-uniform sampling (NUS) of 3D heteronuclear-resolved [(1)H,(1)H]-NOESY data yielded two- to three-fold savings of instrument time for structure determinations of soluble proteins. With the 152-residue protein NP_372339.1 from Staphylococcus aureus and the 71-residue protein NP_346341.1 from Streptococcus pneumoniae, we show that high-quality structures can be obtained with NUS NMR data, which are equally well amenable to robust automated analysis as the corresponding uniformly sampled data.
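A non-uniform sampling schedule in its simplest form is just a random subset of the indirect-dimension increments (a toy sketch; production NUS schedules are typically density-weighted, e.g. biased toward early increments where signal is strongest):

```python
import random

def nus_schedule(n_total, fraction, seed=0):
    """Pick a random subset of indirect-dimension increments to acquire.
    Acquiring only `fraction` of n_total increments is what yields the
    two- to three-fold instrument-time savings; reconstruction fills in
    the skipped points."""
    rng = random.Random(seed)
    keep = max(1, round(n_total * fraction))
    return sorted(rng.sample(range(n_total), keep))

# e.g. acquire one third of 128 increments in a 3D NOESY indirect dimension
sched = nus_schedule(128, 0.33)
```

The sampled spectrum is then reconstructed (e.g. by iterative or compressed-sensing methods) before feeding peak lists into automated analysis such as J-UNIO.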
Generating Performance Models for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav
2017-05-30
Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.
A procedure for automating CFD simulations of an inlet-bleed problem
NASA Technical Reports Server (NTRS)
Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.
1995-01-01
A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. With AUTOMAT, once data for the geometry and flow conditions have been specified (either interactively or via a namelist), it will automatically generate all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem by using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.
Analysis of technical university information system
NASA Astrophysics Data System (ADS)
Savelyev, N. A.; Boyarkin, M. A.
2018-05-01
The paper covers the set and interaction of the existing automated control systems at the Federal State Budgetary Educational Institution of Higher Professional Education "Industrial University of Tyumen". The structural interaction of the existing systems and their functions has been analyzed, which provided the basis for identifying a number of system-wide and local (module-specific) drawbacks in the automation of the university's activities. The authors suggest a new structure for the automated control system, consisting of three major subsystems: management support; training and methodology support; and distance and supplementary education support. Functionality for each subsystem has been defined in accordance with the educational institution's automation requirements. The suggested structure of the ACS will address the challenges facing the university during reorganization and optimization of its institutional management processes as a whole.
Automating Visualization Service Generation with the WATT Compiler
NASA Astrophysics Data System (ADS)
Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.
2007-12-01
As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the Soap 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. 
Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computation workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE users submit Tcl code, retrieve its C++ translation with various files and scripts necessary to locally install the tailor-made web service, or launch the service for a limited session on our test server. This work is supported by NSF through the ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.
Automated and fast building of three-dimensional RNA structures.
Zhao, Yunjie; Huang, Yangyu; Gong, Zhou; Wang, Yanjie; Man, Jianfen; Xiao, Yi
2012-01-01
Building tertiary structures of non-coding RNA is required to understand their functions and design new molecules. Current algorithms of RNA tertiary structure prediction give satisfactory accuracy only for small size and simple topology and many of them need manual manipulation. Here, we present an automated and fast program, 3dRNA, for RNA tertiary structure prediction with reasonable accuracy for RNAs of larger size and complex topology.
Automated real-time software development
NASA Technical Reports Server (NTRS)
Jones, Denise R.; Walker, Carrie K.; Turkovich, John J.
1993-01-01
A Computer-Aided Software Engineering (CASE) system has been developed at the Charles Stark Draper Laboratory (CSDL) under the direction of the NASA Langley Research Center. The CSDL CASE tool provides an automated method of generating source code and hard copy documentation from functional application engineering specifications. The goal is to significantly reduce the cost of developing and maintaining real-time scientific and engineering software while increasing system reliability. This paper describes CSDL CASE and discusses demonstrations that used the tool to automatically generate real-time application code.
Ethics, finance, and automation: a preliminary survey of problems in high frequency trading.
Davis, Michael; Kumiega, Andrew; Van Vliet, Ben
2013-09-01
All of finance is now automated, most notably high frequency trading. This paper examines the ethical implications of this fact. As automation is an interdisciplinary endeavor, we argue that the interfaces between the respective disciplines can lead to conflicting ethical perspectives; we also argue that existing disciplinary standards do not pay enough attention to the ethical problems automation generates. Conflicting perspectives undermine the protection those who rely on trading should have. Ethics in finance can be expanded to include organizational and industry-wide responsibilities to external market participants and society. As a starting point, quality management techniques can provide a foundation for a new cross-disciplinary ethical standard in the age of automation.
Liu, Yijin; Meirer, Florian; Williams, Phillip A.; Wang, Junyue; Andrews, Joy C.; Pianetta, Piero
2012-01-01
Transmission X-ray microscopy (TXM) has been well recognized as a powerful tool for non-destructive investigation of the three-dimensional inner structure of a sample with spatial resolution down to a few tens of nanometers, especially when combined with synchrotron radiation sources. Recent developments of this technique have presented a need for new tools for both system control and data analysis. Here a software package developed in MATLAB for script command generation and analysis of TXM data is presented. The first toolkit, the script generator, allows automating complex experimental tasks which involve up to several thousand motor movements. The second package was designed to accomplish computationally intense tasks such as data processing of mosaic and mosaic tomography datasets; dual-energy contrast imaging, where data are recorded above and below a specific X-ray absorption edge; and TXM X-ray absorption near-edge structure imaging datasets. Furthermore, analytical and iterative tomography reconstruction algorithms were implemented. The compiled software package is freely available. PMID:22338691
FigSum: automatically generating structured text summaries for figures in biomedical literature.
Agarwal, Shashank; Yu, Hong
2009-11-14
Figures are frequently used in biomedical articles to support research findings; however, they are often difficult to comprehend based on their legends alone and information from the full-text articles is required to fully understand them. Previously, we found that the information associated with a single figure is distributed throughout the full-text article the figure appears in. Here, we develop and evaluate a figure summarization system - FigSum, which aggregates this scattered information to improve figure comprehension. For each figure in an article, FigSum generates a structured text summary comprising one sentence from each of the four rhetorical categories - Introduction, Methods, Results and Discussion (IMRaD). The IMRaD category of sentences is predicted by an automated machine learning classifier. Our evaluation shows that FigSum captures 53% of the sentences in the gold standard summaries annotated by biomedical scientists and achieves an average ROUGE-1 score of 0.70, which is higher than a baseline system.
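ROUGE-1, the evaluation metric quoted above, measures unigram overlap between a generated summary and a reference. A minimal recall-oriented sketch of the computation (the exact ROUGE configuration FigSum used is not stated in the abstract):

```python
from collections import Counter

def rouge_1_recall(candidate, reference):
    """Fraction of reference unigrams also present in the candidate summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    return overlap / sum(ref.values())
```

For example, a candidate covering 3 of a reference's 6 word occurrences scores 0.5.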
Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang
2017-06-01
To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of the iteratively optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The training dataset is selected by iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted two-parameter KDE), which incorporated two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional left-breast treatment plans were re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. The automatically generated re-optimized treatment plans were compared with the original manually optimized plans. By combining the iterative training dataset selection and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
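The core of a two-dimensional kernel density estimate of the kind described can be sketched as follows; the Gaussian product kernel, single shared bandwidth, and (feature, dose) interpretation here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def kde_2d(samples, grid_x, grid_y, bandwidth=1.0):
    """Gaussian kernel density estimate on a 2-D grid.

    samples: (n, 2) array of observed (feature, dose) pairs.
    Returns density values of shape (len(grid_x), len(grid_y)).
    """
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    density = np.zeros_like(gx, dtype=float)
    for x0, y0 in samples:
        # Isotropic Gaussian kernel centered on each sample
        density += np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2 * bandwidth ** 2))
    density /= len(samples) * 2 * np.pi * bandwidth ** 2  # normalize to unit mass
    return density
```

A predicted DVH curve would then follow by conditioning such a density on a patient's feature value and accumulating the dose dimension.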
Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael
2010-01-01
A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and to aid other laboratories that do not have the facilities to undertake complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424
A novel artificial intelligence method for weekly dietary menu planning.
Gaál, B; Vassányi, I; Kozmann, G
2005-01-01
Menu planning is an important part of personalized lifestyle counseling. The paper describes the results of an automated menu generator (MenuGene) of the web-based lifestyle counseling system Cordelia, which provides personalized advice to prevent cardiovascular diseases. The menu generator uses genetic algorithms to prepare weekly menus for web users. The objectives are derived from personal medical data collected via forms in Cordelia, combined with general nutritional guidelines. The weekly menu is modeled as a multilevel structure. Results show that the genetic algorithm-based method succeeds in planning dietary menus that satisfy strict numerical constraints at every nutritional level (meal, daily, weekly). The rule-based assessment proved capable of manipulating the mean occurrence of the nutritional components, thus providing a method for adjusting the variety and harmony of the menu plans. By splitting the problem into well-determined sub-problems, weekly menu plans that satisfy nutritional constraints and have well-assorted components can be generated with the same method that is used for daily and meal plan generation.
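The genetic-algorithm approach can be illustrated with a toy single-constraint version; the food list, fitness function, and operator rates below are invented for illustration and are far simpler than MenuGene's multilevel encoding:

```python
import random

def evolve_menu(foods, target_kcal, pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm: choose a 5-item menu whose calories approach a target.

    foods: dict mapping food name -> kcal. Returns the best menu found.
    """
    rng = random.Random(seed)
    names = list(foods)

    def fitness(menu):
        return -abs(sum(foods[f] for f in menu) - target_kcal)

    # Random initial population of 5-item menus
    pop = [[rng.choice(names) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 5)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # point mutation
                child[rng.randrange(5)] = rng.choice(names)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the real system, fitness would aggregate many nutrient constraints across meal, daily, and weekly levels rather than a single calorie target.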
Automation of Ocean Product Metrics
2008-09-30
Presented at the Ocean Sciences 2008 Conference, 5 March 2008, and as Shriver, J., J. D. Dykes, and J. Fabre, "Automation of Operational Ocean Product Metrics," at the 2008 EGU General Assembly, 14 April 2008. The work addresses operational processing (multiple data cuts per day) and multiple-nested models. Routines for generating automated evaluations of model forecast statistics will be developed, and pre-existing tools will be collected to create a generalized tool set, which will include user-interface tools for the metrics data.
Cavalier, Etienne; Betea, Daniela; Schleck, Marie-Louise; Gadisseur, Romy; Vroonen, Laurent; Delanaye, Pierre; Daly, Adrian F; Beckers, Albert
2014-03-01
Parathyroid carcinoma (PCa) is rare and often difficult to differentiate initially from benign disease. Because PCa oversecretes amino-PTH, which is detected by third-generation but not by second-generation PTH assays, the normal 3rd/2nd generation PTH ratio (<1) is inverted in PCa (ie, >1). The objective of the investigation was to study the utility and advantages of automated 3rd/2nd generation PTH ratio measurements using the Liaison XL platform over existing manual techniques. This retrospective laboratory study was conducted at a tertiary-referral academic center and included 11 patients with advanced PCa (mean age 56.0 y). The controls were patients with primary hyperparathyroidism (n = 144; mean age 53.8 y), renal transplantation (n = 41; mean age 50.6 y), hemodialysis (n = 80; mean age 65.2 y), and healthy elderly subjects (n = 40; mean age 72.6 y). The median (interquartile range) 3rd/2nd generation PTH ratio was 1.16 (1.10-1.38) in the PCa group, which was significantly higher than in the control groups: hemodialysis, 0.74 (0.71-0.75); renal transplant, 0.77 (0.73-0.79); primary hyperparathyroidism, 0.76 (0.74-0.78); healthy elderly, 0.80 (0.74-0.83). An inverted 3rd/2nd generation PTH ratio (>1) was seen in 9 of 11 PCa patients (81.8%) and in 7 of 305 controls (2.3%): 3 of 80 hemodialysis (3.8%) and 4 of 144 primary hyperparathyroidism patients (2.8%). Of four PCa patients who had a normal PTH ratio with the manual method, two had an inverted 3rd/2nd generation PTH ratio with the automated method. Study of the 3rd/2nd generation PTH ratio in large patient populations should be feasible using a mainstream automated platform like the Liaison XL. The current study confirms the utility of the inverted 3rd/2nd generation PTH ratio as a marker of PCa (sensitivity: 81.8%; specificity: 97.3%).
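The ratio-based classification and its performance figures reduce to simple arithmetic; a sketch with counts mirroring the abstract (the exact control denominator behind the reported 97.3% specificity is not stated, so the test below computes against all 305 controls):

```python
def inverted(ratios, threshold=1.0):
    """Flag samples whose 3rd/2nd-generation PTH ratio exceeds the threshold."""
    return [r > threshold for r in ratios]

def sensitivity_specificity(case_flags, control_flags):
    """Sensitivity = true-positive rate among cases;
    specificity = true-negative rate among controls."""
    sensitivity = sum(case_flags) / len(case_flags)
    specificity = 1 - sum(control_flags) / len(control_flags)
    return sensitivity, specificity
```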
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
Isoda, Yuta; Sasaki, Norihiko; Kitamura, Kei; Takahashi, Shuji; Manmode, Sujit; Takeda-Okuda, Naoko; Tamura, Jun-ichi
2017-01-01
The total synthesis of TMG-chitotriomycin using an automated electrochemical synthesizer for the assembly of carbohydrate building blocks is demonstrated. We have successfully prepared a precursor of TMG-chitotriomycin, a structurally pure tetrasaccharide with typical protecting groups, through the methodology of automated electrochemical solution-phase synthesis developed by us. The synthesis of structurally well-defined TMG-chitotriomycin has been accomplished in 10 steps from a disaccharide building block. PMID:28684973
Generating Test Templates via Automated Theorem Proving
NASA Technical Reports Server (NTRS)
Kancherla, Mani Prasad
1997-01-01
Testing can be used during the software development process to maintain fidelity between evolving specifications, program designs, and code implementations. We use a form of specification-based testing that employs an automated theorem prover to generate test templates. A similar approach was developed using a model checker on state-intensive systems; the present method applies to systems with functional rather than state-based behaviors. The approach allows the use of incomplete specifications to aid in the generation of tests for potential failure cases. We illustrate the technique on the canonical triangle testing problem and discuss its use in the analysis of a spacecraft scheduling system.
A pattern-based method to automate mask inspection files
NASA Astrophysics Data System (ADS)
Kamal Baharin, Ezni Aznida Binti; Muhsain, Mohamad Fahmi Bin; Ahmad Ibrahim, Muhamad Asraf Bin; Ahmad Noorhani, Ahmad Nurul Ihsan Bin; Sweis, Jason; Lai, Ya-Chieh; Hurat, Philippe
2017-03-01
Mask inspection is a critical step in the mask manufacturing process in order to ensure all dimensions printed are within the needed tolerances. This becomes even more challenging as the device nodes shrink and the complexity of the tapeout increases. Thus, the amount of measurement points and their critical dimension (CD) types are increasing to ensure the quality of the mask. In addition to the mask quality, there is a significant amount of manpower needed when the preparation and debugging of this process are not automated. By utilizing a novel pattern search technology with the ability to measure and report match region scan-line (edge) measurements, we can create a flow to find, measure and mark all metrology locations of interest and provide this automated report to the mask shop for inspection. A digital library is created based on the technology product and node which contains the test patterns to be measured. This paper will discuss how these digital libraries will be generated and then utilized. As a time-critical part of the manufacturing process, this can also reduce the data preparation cycle time, minimize the amount of manual/human error in naming and measuring the various locations, reduce the risk of wrong/missing CD locations, and reduce the amount of manpower needed overall. We will also review an example pattern and how the reporting structure to the mask shop can be processed. This entire process can now be fully automated.
Collecting and Animating Online Satellite Images.
ERIC Educational Resources Information Center
Irons, Ralph
1995-01-01
Describes how to generate automated classroom resources from the Internet. Topics covered include viewing animated satellite weather images using file transfer protocol (FTP); sources of images on the Internet; shareware available for viewing images; software for automating image retrieval; procedures for animating satellite images; and storing…
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf
2013-08-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
Structural Pattern Recognition Techniques for Data Retrieval in Massive Fusion Databases
NASA Astrophysics Data System (ADS)
Vega, J.; Murari, A.; Rattá, G. A.; Castro, P.; Pereira, A.; Portas, A.
2008-03-01
Diagnostics of present-day reactor-class fusion experiments, like the Joint European Torus (JET), generate thousands of signals (time series and video images) in each discharge. There is a direct correspondence between the physical phenomena taking place in the plasma and the set of structural shapes (patterns) that they form in the signals: bumps, unexpected amplitude changes, abrupt peaks, periodic components, high-intensity zones or specific edge contours. A major difficulty related to data analysis is the identification, in a rapid and automated way, of a set of discharges with comparable behavior, i.e. discharges with "similar" patterns. Pattern recognition techniques are efficient tools to search for similar structural forms within the database in a fast and intelligent way. To this end, classification systems must be developed to serve as indexing methods that directly fetch the most similar patterns.
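At its simplest, retrieving similar structural forms is a sliding-window similarity search over signal segments; a toy sketch of that core idea (real fusion-database indexing uses far richer features and classifiers than raw Euclidean distance):

```python
import numpy as np

def best_match(signal, pattern):
    """Return the start index where `pattern` best matches `signal`,
    i.e. the sliding window with the smallest Euclidean distance."""
    m = len(pattern)
    dists = [np.linalg.norm(signal[i:i + m] - pattern)
             for i in range(len(signal) - m + 1)]
    return int(np.argmin(dists))
```

Ranking whole discharges by their best-match distances would then yield the "most similar" set described above.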
Large image microscope array for the compilation of multimodality whole organ image databases.
Namati, Eman; De Ryk, Jessica; Thiesse, Jacqueline; Towfic, Zaid; Hoffman, Eric; Mclennan, Geoffrey
2007-11-01
Three-dimensional, structural and functional digital image databases have many applications in education, research, and clinical medicine. However, to date, apart from cryosectioning, there have been no reliable means to obtain whole-organ, spatially conserving histology. Our aim was to generate a system capable of acquiring high-resolution images, featuring microscopic detail that could still be spatially correlated to the whole organ. To fulfill these objectives required the construction of a system physically capable of creating very fine whole-organ sections and collecting high-magnification and resolution digital images. We therefore designed a large image microscope array (LIMA) to serially section and image entire unembedded organs while maintaining the structural integrity of the tissue. The LIMA consists of several integrated components: a novel large-blade vibrating microtome, a 1.3 megapixel peltier cooled charge-coupled device camera, a high-magnification microscope, and a three axis gantry above the microtome. A custom control program was developed to automate the entire sectioning and automated raster-scan imaging sequence. The system is capable of sectioning unembedded soft tissue down to a thickness of 40 µm at specimen dimensions of 200 x 300 mm to a total depth of 350 mm. The LIMA system has been tested on fixed lung from sheep and mice, resulting in large high-quality image data sets, with minimal distinguishable disturbance in the delicate alveolar structures. Copyright 2007 Wiley-Liss, Inc.
ABodyBuilder: Automated antibody structure prediction with data-driven accuracy estimation
Leem, Jinwoo; Dunbar, James; Georges, Guy; Shi, Jiye; Deane, Charlotte M.
2016-01-01
Computational modeling of antibody structures plays a critical role in therapeutic antibody design. Several antibody modeling pipelines exist, but no freely available methods currently model nanobodies, provide estimates of expected model accuracy, or highlight potential issues with the antibody's experimental development. Here, we describe our automated antibody modeling pipeline, ABodyBuilder, designed to overcome these issues. The algorithm itself follows the standard 4 steps of template selection, orientation prediction, complementarity-determining region (CDR) loop modeling, and side chain prediction. ABodyBuilder then annotates the 'confidence' of the model as a probability that a component of the antibody (e.g., CDRL3 loop) will be modeled within a root-mean-square deviation threshold. It also flags structural motifs on the model that are known to cause issues during in vitro development. ABodyBuilder was tested on 4 separate datasets, including the 11 antibodies from the Antibody Modeling Assessment-II competition. ABodyBuilder builds models that are of similar quality to other methodologies, with sub-Angstrom predictions for the 'canonical' CDR loops. Its ability to model nanobodies and rapidly generate models (~30 seconds per model) widens its potential usage. ABodyBuilder can also help users in decision-making for the development of novel antibodies because it provides model confidence and potential sequence liabilities. ABodyBuilder is freely available at http://opig.stats.ox.ac.uk/webapps/abodybuilder. PMID:27392298
Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The vast majority of these databases have been generated manually, through decades of labor-intensive harvesting of information from the literature, yet while there are many examples of commonly used databases, a significant number of important properties remain locked within the tables, figures, and text of publications. The question addressed in our work is whether and to what extent the process of data collection can be automated. Students of the physical sciences and engineering are often confronted with the challenge of finding and applying property data from the literature, and a central aspect of their education is to develop the critical skills needed to identify such data and discern their meaning or validity. To address shortcomings associated with automated information extraction while simultaneously preparing the next generation of scientists for their future endeavors, we developed a novel course-based approach in which students develop skills in polymer chemistry and physics and apply their knowledge by assisting with the semiautomated creation of a thermodynamic property database.
Spaceport Command and Control System Automated Testing
NASA Technical Reports Server (NTRS)
Stein, Meriel
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires high-quality testing that will properly measure its capabilities. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as to innovate upon the current automation process.
Spaceport Command and Control System Automation Testing
NASA Technical Reports Server (NTRS)
Hwang, Andrew
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires high-quality testing that will properly measure its capabilities. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as to innovate upon the current automation process.
A Manual Segmentation Tool for Three-Dimensional Neuron Datasets.
Magliaro, Chiara; Callara, Alejandro L; Vanello, Nicola; Ahluwalia, Arti
2017-01-01
To date, automated or semi-automated software and algorithms for segmentation of neurons from three-dimensional imaging datasets have had limited success. The gold standard for neural segmentation is considered to be the manual isolation performed by an expert. To facilitate the manual isolation of complex objects from image stacks, such as neurons in their native arrangement within the brain, a new Manual Segmentation Tool (ManSegTool) has been developed. ManSegTool allows users to load an image stack, scroll through the images, and manually draw the structures of interest stack-by-stack. Users can eliminate unwanted regions or split structures (i.e., branches from different neurons that are too close to each other but, to the experienced eye, clearly belong to distinct cells), view the object in 3D, and save the results obtained. The tool can be used for testing the performance of a single-neuron segmentation algorithm or to extract complex objects where the available automated methods still fail. Here we describe the software's main features and then show an example of how ManSegTool can be used to segment neuron images acquired using a confocal microscope. In particular, expert neuroscientists were asked to segment different neurons, from which morphometric variables were subsequently extracted as a benchmark for precision. In addition, a literature-defined index for evaluating the goodness of segmentation was used as a benchmark for accuracy. Neocortical layer axons from a DIADEM challenge dataset were also segmented with ManSegTool and compared with the manual "gold standard" generated for the competition.
Comparison of BrainTool to other UML modeling and model transformation tools
NASA Astrophysics Data System (ADS)
Nikiforova, Oksana; Gusarovs, Konstantins
2017-07-01
Over the last 30 years, numerous model-driven software development approaches have been offered to address problems with development productivity and the quality of the resulting software. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar claims regarding Unified Modeling Language (UML) models at different levels of abstraction, asserting that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features: for the traditional ones, a model editor and a model repository; for the most advanced ones, also a code generator (possibly using a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor for defining new transformations. The present paper contains the results of a comparison of CASE tools (mainly UML editors) against the level of automation they offer.
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
Peng, Chen; Frommlet, Alexandra; Perez, Manuel; Cobas, Carlos; Blechschmidt, Anke; Dominguez, Santiago; Lingel, Andreas
2016-04-14
NMR binding assays are routinely applied in hit finding and validation during early stages of drug discovery, particularly for fragment-based lead generation. To this end, compound libraries are screened by ligand-observed NMR experiments such as STD, T1ρ, and CPMG to identify molecules interacting with a target. The analysis of a high number of complex spectra is performed largely manually and therefore represents a limiting step in hit generation campaigns. Here we report a novel integrated computational procedure that processes and analyzes ligand-observed proton and fluorine NMR binding data in a fully automated fashion. A performance evaluation comparing automated and manual analysis results on (19)F- and (1)H-detected data sets shows that the program delivers robust, high-confidence hit lists in a fraction of the time needed for manual analysis and greatly facilitates visual inspection of the associated NMR spectra. These features enable considerably higher throughput, the assessment of larger libraries, and shorter turn-around times.
Flight-deck automation - Promises and problems
NASA Technical Reports Server (NTRS)
Wiener, E. L.; Curry, R. E.
1980-01-01
The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand, negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilizing, and improving control and monitoring systems. Investigation into flight-deck automation systems is important because the knowledge gained can be applied to other systems, such as air traffic control and nuclear power generation, but the many problems encountered with automated systems need to be analyzed and overcome in future research.
About development of automation control systems
NASA Astrophysics Data System (ADS)
Myshlyaev, L. P.; Wenger, K. G.; Ivushkin, K. A.; Makarov, V. N.
2018-05-01
The shortcomings of current approaches to the development of control automation systems are identified, along with ways to improve them: correct formation of the objects of study and optimization; joint synthesis of control objects and control systems; and greater structural diversity of control-system elements. Diagrams of control systems whose elements have a purposefully variable structure are presented, together with the structures of control algorithms for an object with a purposefully variable structure.
Panjikar, Santosh; Parthasarathy, Venkataraman; Lamzin, Victor S; Weiss, Manfred S; Tucker, Paul A
2005-04-01
The EMBL-Hamburg Automated Crystal Structure Determination Platform is a system that combines a number of existing macromolecular crystallographic computer programs and several decision-makers into a software pipeline for automated and efficient crystal structure determination. The pipeline can be invoked as soon as X-ray data from derivatized protein crystals have been collected and processed. It is controlled by a web-based graphical user interface for data and parameter input, and for monitoring the progress of structure determination. A large number of possible structure-solution paths are encoded in the system and the optimal path is selected by the decision-makers as the structure solution evolves. The processes have been optimized for speed so that the pipeline can be used effectively for validating the X-ray experiment at a synchrotron beamline.
Managing Automation: A Process, Not a Project.
ERIC Educational Resources Information Center
Hoffmann, Ellen
1988-01-01
Discussion of issues in management of library automation includes: (1) hardware, including systems growth and contracts; (2) software changes, vendor relations, local systems, and microcomputer software; (3) item and authority databases; (4) automation and library staff, organizational structure, and managing change; and (5) environmental issues,…
NASA Astrophysics Data System (ADS)
Mandal, Chhabinath; Linthicum, D. Scott
1993-04-01
A modelling algorithm (PROGEN) for the generation of complete protein atomic coordinates from only the α-carbon coordinates is described. PROGEN utilizes an optimal geometry parameter (OGP) database for the positioning of atoms for each amino acid of the polypeptide model. The OGP database was established by examining the statistical correlations between 23 different intra-peptide and inter-peptide geometric parameters relative to the α-carbon distances for each amino acid in a library of 19 known proteins from the Brookhaven Protein Database (BPDB). The OGP files for specific amino acids and peptides were used to generate the atomic positions, with respect to α-carbons, for main-chain and side-chain atoms in the modelled structure. Refinement of the initial model was accomplished using energy minimization (EM) and molecular dynamics techniques. PROGEN was tested using 60 known proteins in the BPDB, representing a wide spectrum of primary and secondary structures. Comparison between PROGEN models and BPDB crystal reference structures gave r.m.s.d. values for peptide main-chain atoms between 0.29 and 0.76 Å, with a grand average of 0.53 Å for all 60 models. The r.m.s.d. for all non-hydrogen atoms ranged between 1.44 and 1.93 Å for the 60 polypeptide models. PROGEN was also able to make the correct assignment of cis- or trans-proline configurations in the protein structures examined. PROGEN offers a fully automatic building and refinement procedure and requires no special or specific structural considerations for the protein to be modelled.
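PROGEN's validation step compares each rebuilt model against its crystal reference structure via r.m.s.d. over matched atoms. As a minimal sketch (not PROGEN's implementation, which would also superimpose the structures first, e.g. with a Kabsch alignment), the core deviation measure can be written as:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two matched sets of
    atomic coordinates, given as lists of (x, y, z) tuples."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Toy example: a model displaced by 0.5 A along x from the reference.
reference = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
model     = [(0.5, 0.0, 0.0), (2.0, 0.0, 0.0), (3.5, 0.0, 0.0)]
print(round(rmsd(model, reference), 2))  # 0.5
```

Applied to main-chain atoms only, this yields the 0.29-0.76 Å range reported above; over all non-hydrogen atoms, side-chain placement raises it to the 1.44-1.93 Å range.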
2008-12-01
clearly observed in the game industry (Introversion, 2008). Currently there are many tools available to assist in automating the production of large...Graphics and Interactive Techniques, Melbourne, Australia, February 11-14. Introversion Software, 2008: Procedural Content Generation.
Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W
2009-01-01
Background Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit the control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes. Results We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence. Conclusion We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. 
Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
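The workflow's output is described as a semantic model of putative relations, each linked to its evidence. A minimal stand-in for that structure (plain Python rather than OWL/RDF; all field names and the example PMID are illustrative, not taken from the AIDA toolkit) might look like:

```python
# Minimal sketch of a semantic model of putative biological relations,
# each linked to its supporting evidence. All names are illustrative.
relations = []

def add_relation(subject, predicate, obj, evidence):
    relations.append({
        "triple": (subject, predicate, obj),
        "evidence": evidence,  # e.g. a source document ID and sentence
    })

add_relation("NF-Kappa B", "involved_in", "epigenetic gene regulation",
             {"pmid": "12345678", "sentence": "..."})

# Query: every putative relation about a given protein, with evidence.
hits = [r for r in relations if r["triple"][0] == "NF-Kappa B"]
print(len(hits))  # 1
```

In the actual methodology these triples would live in an OWL/RDF store so they can link out to OBO ontologies and SKOS vocabularies as described above.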
Automating the Generation of the Cassini Tour Atlas Database
NASA Technical Reports Server (NTRS)
Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.
2010-01-01
The Tour Atlas is a large database of geometrical tables, plots, and graphics used by Cassini science planning engineers and scientists primarily for science observation planning. Over time, as the contents of the Tour Atlas grew, the amount of time it took to recreate the Tour Atlas similarly grew--to the point that it took one person a week of effort. When Cassini tour designers estimated that they were going to create approximately 30 candidate Extended Mission trajectories--which needed to be analyzed for science return in a short amount of time--it became a necessity to automate. We report on the automation methodology that reduced the amount of time it took one person to (re)generate a Tour Atlas from a week to, literally, one UNIX command.
Ruscio, D; Bos, A J; Ciceri, M R
2017-06-01
The interaction with Advanced Driver Assistance Systems has several positive implications for road safety, but also some potential downsides such as mental workload and automation complacency. Malleable attentional resources allocation theory describes two possible processes that can generate workload in interaction with advanced assisting devices. The purpose of the present study is to determine whether specific analysis of the different modalities of autonomic nervous system control can be used to discriminate different potential workload processes generated during assisted-driving tasks and automation complacency situations. Thirty-five drivers were tested in a virtual scenario while using a head-up advanced warning assistance system. Repeated MANOVAs were used to examine changes in autonomic activity across a combination of different user interactions generated by the advanced assistance system: (1) expected take-over request without anticipatory warning; (2) expected take-over request with two-second anticipatory warning; (3) unexpected take-over request with misleading warning; (4) unexpected take-over request without warning. Results show that analysis of autonomic modulations can discriminate two different resource allocation processes, related to different behavioral performances. The user interaction that required divided attention under expected situations produced performance enhancement and reciprocally-coupled parasympathetic inhibition with sympathetic activity. At the same time, supervising interactions that generated automation complacency were described specifically by uncoupled sympathetic activation. Safety implications for automated assistance system developments are considered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automated Assignment of MS/MS Cleavable Cross-Links in Protein 3D-Structure Analysis
NASA Astrophysics Data System (ADS)
Götze, Michael; Pettelkau, Jens; Fritzsche, Romy; Ihling, Christian H.; Schäfer, Mathias; Sinz, Andrea
2015-01-01
CID-MS/MS cleavable cross-linkers hold an enormous potential for an automated analysis of cross-linked products, which is essential for conducting structural proteomics studies. The created characteristic fragment ion patterns can easily be used for an automated assignment and discrimination of cross-linked products. To date, there are only a few software solutions available that make use of these properties, but none allows for an automated analysis of cleavable cross-linked products. The MeroX software fills this gap and presents a powerful tool for protein 3D-structure analysis in combination with MS/MS cleavable cross-linkers. We show that MeroX allows an automatic screening of characteristic fragment ions, considering static and variable peptide modifications, and effectively scores different types of cross-links. No manual input is required for a correct assignment of cross-links and false discovery rates are calculated. The self-explanatory graphical user interface of MeroX provides easy access for an automated cross-link search platform that is compatible with commonly used data file formats, enabling analysis of data originating from different instruments. The combination of an MS/MS cleavable cross-linker with a dedicated software tool for data analysis provides an automated workflow for 3D-structure analysis of proteins. MeroX is available at
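MeroX reports false discovery rates for its cross-link assignments. The standard way to estimate an FDR in this setting is target-decoy competition; MeroX's exact scoring scheme is not detailed here, so the following is a generic hedged sketch of that idea, not MeroX's implementation:

```python
def target_decoy_fdr(scores, score_threshold):
    """Estimate the false discovery rate at a score threshold from a
    mixed list of target and decoy matches. `scores` is a list of
    (score, is_decoy) pairs; each accepted decoy is assumed to stand
    in for one false positive among the accepted targets."""
    targets = sum(1 for s, d in scores if s >= score_threshold and not d)
    decoys  = sum(1 for s, d in scores if s >= score_threshold and d)
    return decoys / targets if targets else 0.0

# Illustrative cross-link candidate scores.
matches = [(120, False), (95, False), (90, True), (80, False), (40, True)]
print(target_decoy_fdr(matches, 50))  # 1 decoy / 3 targets, about 0.33
```

In practice the threshold is chosen as the lowest score at which the estimated FDR stays below a chosen level (e.g. 1% or 5%).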
Integrated Communications and Work Efficiency: Impacts on Organizational Structure and Power.
ERIC Educational Resources Information Center
Wigand, Rolf T.
This paper reviews the work environment surrounding integrated office systems, synthesizes the known effects of automated office technologies, and discusses their impact on work efficiency in office environments. Particular attention is given to the effect of automated technologies on networks, workflow/processes, and organizational structure and…
Peak picking multidimensional NMR spectra with the contour geometry based algorithm CYPICK.
Würz, Julia M; Güntert, Peter
2017-01-01
The automated identification of signals in multidimensional NMR spectra is a challenging task, complicated by signal overlap, noise, and spectral artifacts, for which no universally accepted method is available. Here, we present a new peak picking algorithm, CYPICK, that follows, as far as possible, the manual approach taken by a spectroscopist who analyzes peak patterns in contour plots of the spectrum, but is fully automated. Human visual inspection is replaced by the evaluation of geometric criteria applied to contour lines, such as local extremality, approximate circularity (after appropriate scaling of the spectrum axes), and convexity. The performance of CYPICK was evaluated for a variety of spectra from different proteins by systematic comparison with peak lists obtained by other, manual or automated, peak picking methods, as well as by analyzing the results of automated chemical shift assignment and structure calculation based on input peak lists from CYPICK. The results show that CYPICK yielded peak lists that compare in most cases favorably to those obtained by other automated peak pickers with respect to the criteria of finding a maximal number of real signals, a minimal number of artifact peaks, and maximal correctness of the chemical shift assignments and the three-dimensional structure obtained by fully automated assignment and structure calculation.
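CYPICK replaces visual inspection with geometric tests on contour lines (local extremality, approximate circularity, convexity). A heavily simplified, hedged sketch of just the extremality criterion, operating directly on grid points of a 2D spectrum rather than on contours, is:

```python
def pick_peaks(spectrum, noise_level):
    """Pick interior grid points that are strict local maxima above a
    noise threshold in a 2D spectrum (a list of rows). A crude
    stand-in for CYPICK's contour-based extremality test; the real
    algorithm also checks circularity and convexity of contour lines."""
    peaks = []
    for i in range(1, len(spectrum) - 1):
        for j in range(1, len(spectrum[0]) - 1):
            v = spectrum[i][j]
            if v <= noise_level:
                continue
            neighbours = [spectrum[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if all(v > n for n in neighbours):
                peaks.append((i, j, v))
    return peaks

grid = [[0, 0, 0, 0],
        [0, 5, 0, 0],
        [0, 0, 0, 9],
        [0, 0, 0, 0]]
print(pick_peaks(grid, 1))  # [(1, 1, 5)]
```

Note that the 9 on the grid edge is not picked: like most peak pickers, this sketch only evaluates points whose full neighbourhood lies inside the spectrum.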
2006-06-01
levels of automation applied as per Figure 13. ...models generated for this thesis were set to run for 60 minutes. To run the simulation for the set time, the analyst provides a random number seed to...1984). The IMPRINT workload value of 60 has been used by a consensus of workload modeling SMEs to represent the 'high' threshold, while the
NASA Astrophysics Data System (ADS)
Anders, Niels; Suomalainen, Juha; Seeger, Manuel; Keesstra, Saskia; Bartholomeus, Harm; Paron, Paolo
2014-05-01
The recent increase of performance and endurance of electronically controlled flying platforms, such as multi-copters and fixed-wing airplanes, and decreasing size and weight of different sensors and batteries leads to increasing popularity of Unmanned Aerial Systems (UAS) for scientific purposes. Modern workflows that implement UAS include guided flight plan generation, 3D GPS navigation for fully automated piloting, and automated processing with new techniques such as "Structure from Motion" photogrammetry. UAS are often equipped with normal RGB cameras, multi- and hyperspectral sensors, radar, or other sensors, and provide a cheap and flexible solution for creating multi-temporal data sets. UAS revolutionized multi-temporal research allowing new applications related to change analysis and process monitoring. The EGU General Assembly 2014 is hosting a session on platforms, sensors and applications with UAS in soil science and geomorphology. This presentation briefly summarizes the outcome of this session, addressing the current state and future challenges of small-platform data acquisition in soil science and geomorphology.
Tiret, Brice; Shestov, Alexander A.; Valette, Julien; Henry, Pierre-Gilles
2017-01-01
Most current brain metabolic models are not capable of taking into account the dynamic isotopomer information available from fine structure multiplets in 13C spectra, due to the difficulty of implementing such models. Here we present a new approach that allows automatic implementation of multi-compartment metabolic models capable of fitting any number of 13C isotopomer curves in the brain. The new automated approach also makes it possible to quickly modify and test new models to best describe the experimental data. We demonstrate the power of the new approach by testing the effect of adding separate pyruvate pools in astrocytes and neurons, and adding a vesicular neuronal glutamate pool. Including both changes reduced the global fit residual by half and pointed to dilution of label prior to entry into the astrocytic TCA cycle as the main source of glutamine dilution. The glutamate-glutamine cycle rate was particularly sensitive to changes in the model. PMID:26553273
Licurse, Mindy Y; Lalevic, Darco; Zafar, Hanna M; Schnall, Mitchell D; Cook, Tessa S
2017-04-01
An automated radiology recommendation-tracking engine for incidental focal masses in the liver, pancreas, kidneys, and adrenal glands was launched within our institution in July 2013. For 2 years, the majority of CT, MR, and US examination reports generated within our health system were mined by the engine. However, the need to expand the system beyond the initial four organs was soon identified. In July 2015, the second phase of the system was implemented and expanded to include additional anatomic structures in the abdomen and pelvis, as well as to provide non-radiology and non-imaging options for follow-up. The most frequent organs with incidental findings, outside of the original four, included the ovaries and the endometrium, which also correlated to the most frequently ordered imaging follow-up study of pelvic ultrasound and non-imaging follow-up study of endometrial biopsies, respectively. The second phase expansion has demonstrated new venues for augmenting and improving radiologist roles in optimal communication and management of incidental findings.
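The engine described above mines CT, MR, and US report text for incidental findings and their recommended follow-up. As a hedged illustration only (the institutional engine's actual rules and vocabulary are not published here; the pattern and study names below are hypothetical), a single follow-up recommendation might be extracted like this:

```python
import re

# Hypothetical pattern for follow-up recommendations in report text;
# a production engine would use a far richer rule set or NLP model.
FOLLOWUP = re.compile(
    r"recommend(?:ed)?\s+(?P<study>pelvic ultrasound|endometrial biopsy|MRI)"
    r"(?:\s+in\s+(?P<interval>\d+\s+(?:weeks|months)))?",
    re.IGNORECASE)

report = ("Incidental 3 cm ovarian cyst. "
          "Recommend pelvic ultrasound in 6 weeks.")
m = FOLLOWUP.search(report)
print(m.group("study"), m.group("interval"))  # pelvic ultrasound 6 weeks
```

Extracted (study, interval) pairs can then be tracked against the scheduling system to detect incidental findings whose follow-up never occurred.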
A Novel Method for Automation of 3D Hydro Break Line Generation from LiDAR Data Using MATLAB
NASA Astrophysics Data System (ADS)
Toscano, G. J.; Gopalam, U.; Devarajan, V.
2013-08-01
Water body detection is necessary to generate hydro break lines, which are in turn useful in creating deliverables such as TINs, contours, DEMs from LiDAR data. Hydro flattening follows the detection and delineation of water bodies (lakes, rivers, ponds, reservoirs, streams etc.) with hydro break lines. Manual hydro break line generation is time consuming and expensive. Accuracy and processing time depend on the number of vertices marked for delineation of break lines. Automation with minimal human intervention is desired for this operation. This paper proposes using a novel histogram analysis of LiDAR elevation data and LiDAR intensity data to automatically detect water bodies. Detection of water bodies using elevation information was verified by checking against LiDAR intensity data since the spectral reflectance of water bodies is very small compared with that of land and vegetation in near infra-red wavelength range. Detection of water bodies using LiDAR intensity data was also verified by checking against LiDAR elevation data. False detections were removed using morphological operations and 3D break lines were generated. Finally, a comparison of automatically generated break lines with their semi-automated/manual counterparts was performed to assess the accuracy of the proposed method and the results were discussed.
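The key physical cue above is that water absorbs near-infrared, so water returns have low LiDAR intensity, while a water surface is also locally flat in elevation. A toy sketch of combining the two cues (not the paper's histogram method, which operates on full elevation and intensity histograms with morphological cleanup; thresholds here are illustrative):

```python
def detect_water(points, intensity_threshold, elevation_tolerance):
    """Flag LiDAR returns as water when intensity is low (water absorbs
    near-infrared) AND elevation sits flat near the estimated water
    surface. A simplified stand-in for the paper's dual histogram
    analysis; `points` are (elevation, intensity) pairs."""
    low = [e for e, i in points if i < intensity_threshold]
    if not low:
        return [False] * len(points)
    water_level = min(low)  # crude water-surface elevation estimate
    return [i < intensity_threshold and abs(e - water_level) < elevation_tolerance
            for e, i in points]

# (elevation m, intensity): two water returns, a field, and a rooftop.
pts = [(10.0, 200), (10.1, 15), (10.0, 12), (14.5, 180)]
print(detect_water(pts, 50, 0.5))  # [False, True, True, False]
```

Requiring both cues is what lets each data source verify the other, as the paper describes; either cue alone misfires on shadows (low intensity) or flat pavement (flat elevation).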
NASA Astrophysics Data System (ADS)
Winkel, D.; Bol, G. H.; van Asselen, B.; Hes, J.; Scholten, V.; Kerkmeijer, L. G. W.; Raaymakers, B. W.
2016-12-01
To develop an automated radiotherapy treatment planning and optimization workflow to efficiently create patient-specific, optimized, clinical-grade treatment plans for prostate cancer and to implement it in clinical practice. A two-phased planning and optimization workflow was developed to automatically generate 77Gy 5-field simultaneously integrated boost intensity modulated radiation therapy (SIB-IMRT) plans for prostate cancer treatment. A retrospective planning study (n = 100) was performed in which automatically and manually generated treatment plans were compared. A clinical pilot (n = 21) was performed to investigate the usability of our method. Operator time for the planning process was reduced to <5 min. The retrospective planning study showed that 98 plans met all clinical constraints. Significant improvements were made in the volume receiving 72Gy (V72Gy) for the bladder and rectum and the mean dose of the bladder and the body. A reduced plan variance was observed. During the clinical pilot 20 automatically generated plans met all constraints and 17 plans were selected for treatment. The automated radiotherapy treatment planning and optimization workflow is capable of efficiently generating patient-specific, optimized, and improved clinical-grade plans. It has now been adopted as the current standard workflow in our clinic to generate treatment plans for prostate cancer.
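The V72Gy figure reported above is a standard dose-volume metric: the fraction of an organ's volume receiving at least 72 Gy. A minimal sketch of computing it from per-voxel doses and checking it against a constraint (the dose values and the constraint level below are illustrative, not from this study):

```python
def volume_at_dose(voxel_doses, dose_gy):
    """Fraction of a structure's volume receiving at least `dose_gy`
    (the V-dose metric, e.g. V72Gy), assuming equal voxel volumes."""
    return sum(1 for d in voxel_doses if d >= dose_gy) / len(voxel_doses)

# Illustrative bladder voxel doses (Gy) and a hypothetical constraint.
bladder = [45.0, 60.0, 73.5, 74.0, 30.0, 72.1, 50.0, 20.0, 10.0, 71.9]
v72 = volume_at_dose(bladder, 72.0)
print(v72)  # 0.3
passes = v72 < 0.35  # hypothetical V72Gy constraint
```

An automated planner evaluates a battery of such constraints per structure after each optimization phase, which is what makes unattended clinical-grade plan generation possible.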
NASA Technical Reports Server (NTRS)
Corker, Kevin; Pisanich, Gregory; Condon, Gregory W. (Technical Monitor)
1995-01-01
A predictive model of human operator performance (flight crew and air traffic control (ATC)) has been developed and applied in order to evaluate the impact of automation developments in flight management and air traffic control. The model is used to predict the performance of a two person flight crew and the ATC operators generating and responding to clearances aided by the Center TRACON Automation System (CTAS). The purpose of the modeling is to support evaluation and design of automated aids for flight management and airspace management and to predict required changes in procedure both air and ground in response to advancing automation in both domains. Additional information is contained in the original extended abstract.
Raza, Ali S.; Zhang, Xian; De Moraes, Carlos G. V.; Reisman, Charles A.; Liebmann, Jeffrey M.; Ritch, Robert; Hood, Donald C.
2014-01-01
Purpose. To improve the detection of glaucoma, techniques for assessing local patterns of damage and for combining structure and function were developed. Methods. Standard automated perimetry (SAP) and frequency-domain optical coherence tomography (fdOCT) data, consisting of macular retinal ganglion cell plus inner plexiform layer (mRGCPL) as well as macular and optic disc retinal nerve fiber layer (mRNFL and dRNFL) thicknesses, were collected from 52 eyes of 52 healthy controls and 156 eyes of 96 glaucoma suspects and patients. In addition to generating simple global metrics, SAP and fdOCT data were searched for contiguous clusters of abnormal points and converted to a continuous metric (pcc). The pcc metric, along with simpler methods, was used to combine the information from the SAP and fdOCT. The performance of different methods was assessed using areas under receiver operating characteristic curves (AROC scores). Results. The pcc metric performed better than simple global measures for both the fdOCT and SAP. The best combined structure-function metric (mRGCPL&SAP pcc, AROC = 0.868 ± 0.032) was better (statistically significant) than the best metrics for independent measures of structure and function. When SAP was used as part of the inclusion and exclusion criteria, AROC scores increased for all metrics, including the best combined structure-function metric (AROC = 0.975 ± 0.014). Conclusions. A combined structure-function metric improved the detection of glaucomatous eyes. Overall, the primary sources of value-added for glaucoma detection stem from the continuous cluster search (the pcc), the mRGCPL data, and the combination of structure and function. PMID:24408977
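The continuous cluster search underlying the pcc metric looks for contiguous groups of abnormal test points rather than isolated ones. A hedged sketch of that core idea as a 4-connected flood fill on a grid of abnormal/normal flags (the actual pcc additionally weights clusters into a continuous score; the minimum cluster size here is illustrative):

```python
def contiguous_clusters(abnormal, min_size=2):
    """Find 4-connected clusters of abnormal test points on a grid,
    discarding clusters smaller than `min_size`. A simplified version
    of the contiguous-cluster search behind the pcc metric.
    `abnormal` is a 2D list of booleans."""
    rows, cols = len(abnormal), len(abnormal[0])
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if not abnormal[r][c] or (r, c) in seen:
                continue
            stack, cluster = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                cluster.append((y, x))
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and abnormal[ny][nx] and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            if len(cluster) >= min_size:
                clusters.append(cluster)
    return clusters

grid = [[True, True, False],
        [False, False, False],
        [False, False, True]]
print(len(contiguous_clusters(grid)))  # 1 (the isolated point is dropped)
```

Discarding isolated abnormal points is what makes the cluster approach more robust to test noise than counting abnormal points globally.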
Automated structure and flow measurement - a promising tool in nailfold capillaroscopy.
Berks, Michael; Dinsdale, Graham; Murray, Andrea; Moore, Tonia; Manning, Joanne; Taylor, Chris; Herrick, Ariane L
2018-07-01
Despite increasing interest in nailfold capillaroscopy, objective measures of capillary structure and blood flow have been little studied. We aimed to test the hypothesis that structural measurements, capillary flow, and a combined measure have the predictive power to separate patients with systemic sclerosis (SSc) from those with primary Raynaud's phenomenon (PRP) and healthy controls (HC). 50 patients with SSc, 12 with PRP, and 50 HC were imaged using a novel capillaroscopy system that generates high-quality nailfold images and provides fully-automated measurements of capillary structure and blood flow (capillary density, mean width, maximum width, shape score, derangement and mean flow velocity). Population statistics summarise the differences between the three groups. Areas under ROC curves (A_Z) were used to measure classification accuracy when assigning individuals to SSc and HC/PRP groups. Statistically significant differences in group means were found between patients with SSc and both HC and patients with PRP, for all measurements, e.g. mean width (μm) ± SE: 15.0 ± 0.71, 12.7 ± 0.74 and 11.8 ± 0.23 for SSc, PRP and HC respectively. Combining the five structural measurements gave better classification (A_Z = 0.919 ± 0.026) than the best single measurement (mean width, A_Z = 0.874 ± 0.043), whilst adding flow further improved classification (A_Z = 0.930 ± 0.024). Structural and blood flow measurements are both able to distinguish patients with SSc from those with PRP/HC. Importantly, these hold promise as clinical trial outcome measures for treatments aimed at improving finger blood flow or microvascular remodelling. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
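The A_Z values above are areas under ROC curves, which equal the probability that a randomly chosen patient scores higher on the measurement than a randomly chosen control. A minimal sketch via the Mann-Whitney formulation (the measurement values below are illustrative, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve (A_Z) via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case, counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative mean-width measurements (um): SSc patients vs controls.
ssc      = [15.0, 14.2, 16.1, 12.9]
controls = [11.8, 12.1, 12.9, 11.0]
print(auc(ssc, controls))  # 0.96875
```

An A_Z of 0.5 means chance-level separation and 1.0 means perfect separation, which is why the combined structure-plus-flow score of 0.930 represents a genuinely strong classifier.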
Automated problem list generation and physicians' perspectives from a pilot study.
Devarakonda, Murthy V; Mehta, Neil; Tsou, Ching-Huei; Liang, Jennifer J; Nowacki, Amy S; Jelovsek, John Eric
2017-09-01
An accurate, comprehensive and up-to-date problem list can help clinicians provide patient-centered care. Unfortunately, problem lists created and maintained in electronic health records by providers tend to be inaccurate, duplicative and out of date. With advances in machine learning and natural language processing, it is possible to automatically generate a problem list from the data in the EHR and keep it current. In this paper, we describe an automated problem list generation method and report on insights from a pilot study of physicians' assessment of the generated problem lists compared to existing provider-curated problem lists in an institution's EHR system. The natural language processing and machine learning-based Watson method models clinical thinking in identifying a patient's problem list using clinical notes and structured data. This pilot study assessed the Watson method and included 15 randomly selected, de-identified patient records from a large healthcare system that were each planned to be reviewed by at least two internal medicine physicians. The physicians created their own problem lists, and then evaluated the overall usefulness of their own problem lists (P), Watson-generated problem lists (W), and the existing EHR problem lists (E) on a 10-point scale. The primary outcome was pairwise comparisons of P, W, and E. Six out of the 10 invited physicians completed 27 assessments of P, W, and E, and in the process evaluated 732 Watson-generated problems and 444 problems in the EHR system. As expected, physicians rated their own lists, P, highest. However, W was rated higher than E. In 89% of assessments, Watson identified at least one important problem that physicians missed. Cognitive computing systems like this Watson system hold the potential for accurate, problem-list-centered summarization of patient records, potentially leading to increased efficiency, better clinical decision support, and improved quality of patient care.
Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Automated Assistance in the Formulation of Search Statements for Bibliographic Databases.
ERIC Educational Resources Information Center
Oakes, Michael P.; Taylor, Malcolm J.
1998-01-01
Reports on the design of an automated query system to help pharmacologists access the Derwent Drug File (DDF). Topics include knowledge types; knowledge representation; role of the search intermediary; vocabulary selection, thesaurus, and user input in natural language; browsing; evaluation methods; and search statement generation for the World…
Automated Guidance from Physiological Sensing to Reduce Thermal-Work Strain Levels on a Novel Task
USDA-ARS?s Scientific Manuscript database
This experiment demonstrated that automated pace guidance generated from real-time physiological monitoring allowed least stressful completion of a timed (60 minute limit) 5 mile treadmill exercise. An optimal pacing policy was estimated from a Markov decision process that balanced the goals of the...
Automated road segment creation process : a report on research sponsored by SaferSim.
DOT National Transportation Integrated Search
2016-08-01
This report provides a summary of a set of tools that can be used to automate the process : of generating roadway surfaces from alignment and texture information. The tools developed : were created in Python 3.x and rely on the availability of two da...
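The report's tools generate roadway surfaces from alignment and texture information in Python. As a hedged toy sketch of the geometric core only (offsetting a centerline polyline by half the road width to form edge lines; the function name and the 7.2 m width are illustrative, not from the report's tools):

```python
import math

def road_surface(alignment, width):
    """Generate left/right edge points for a roadway surface by
    offsetting each alignment segment perpendicularly by width/2.
    A toy version of automated road-segment creation; real tools also
    handle curves, superelevation, and texture mapping."""
    half = width / 2.0
    left, right = [], []
    for (x0, y0), (x1, y1) in zip(alignment, alignment[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length  # unit normal to the segment
        left.append((x0 + nx * half, y0 + ny * half))
        right.append((x0 - nx * half, y0 - ny * half))
    return left, right

centerline = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
left, right = road_surface(centerline, 7.2)  # 7.2 m two-lane width
print(left[0], right[0])  # (0.0, 3.6) (0.0, -3.6)
```

Triangulating between consecutive left/right edge pairs then yields the renderable road surface mesh for a driving-simulation scene.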
Designing Automated Guidance to Promote Productive Revision of Science Explanations
ERIC Educational Resources Information Center
Tansomboon, Charissa; Gerard, Libby F.; Vitale, Jonathan M.; Linn, Marcia C.
2017-01-01
Supporting students to revise their written explanations in science can help students to integrate disparate ideas and develop a coherent, generative account of complex scientific topics. Using natural language processing to analyze student written work, we compare forms of automated guidance designed to motivate productive revision and help…
Automation: An Illustration of Social Change.
ERIC Educational Resources Information Center
Warnat, Winifred I.
Advanced automation is significantly affecting American society and the individual. To understand the extent of this impact, an understanding of the country's service economy is necessary. The United States made the transition from a goods- to service-based economy shortly after World War II. In 1982, services generated 67% of the Gross National…
Adaptive Automation Design and Implementation
2015-09-17
Study: Space Navigator. This section demonstrates the player modeling paradigm, focusing specifically on the response generation section of the player ...human-machine system, a real-time player modeling framework for imitating a specific person's task performance, and the Adaptive Automation System...Clustering-Based Real-Time Player Modeling
NASA Astrophysics Data System (ADS)
Fiorini, Rodolfo A.; Dacquino, Gianfranco
2005-03-01
GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-Dimensional shape/texture optimal synthetic representation, description and learning, has been presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-Dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. As a matter of fact, any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization, for reliable automated object learning and discrimination, can deeply benefit from GEOGINE's progressive automated model generation computational kernel.
Main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, 3) Arbitrary Model Precision for robust object description and identification.
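The abstract does not spell out GEOGINE's tensor invariants concretely. As a loose 2-D analogy only, the classic Hu moment invariants illustrate how translation-, scale- and rotation-invariant shape descriptors are computed from image moments; this sketch is not GEOGINE's algorithm:

```python
import numpy as np

def hu_invariants(img):
    """First two Hu moment invariants of a 2-D intensity image:
    translation-, scale- and rotation-invariant shape descriptors
    built from normalised central moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                      # central moment mu_pq
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                     # scale-normalised moment eta_pq
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# A blob and its 90-degree rotation yield identical invariants.
img = np.zeros((32, 32))
img[8:20, 10:22] = 1.0
rot = np.rot90(img)
print(np.allclose(hu_invariants(img), hu_invariants(rot)))  # True
```

Because each invariant is a fixed polynomial in the normalised moments, it serves as one "invariant parameter word" describing the shape regardless of pose.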
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
Neubert, Sebastian; Göde, Bernd; Gu, Xiangyu; Stoll, Norbert; Thurow, Kerstin
2017-04-01
Modern business process management (BPM) is increasingly interesting for laboratory automation. End-to-end workflow automation and improved top-level systems integration for information technology (IT) and automation systems are especially prominent objectives. The ISO standard Business Process Model and Notation (BPMN) 2.X provides a system-independent graphical process control notation, accepted across disciplines, that supports process analysis while also being executable. The transfer of BPM solutions to structured laboratory automation places novel demands, for example, concerning real-time-critical process and systems integration. The article discusses the potential of laboratory execution systems (LESs) for an easier implementation of the business process management system (BPMS) in hierarchical laboratory automation. In particular, complex application scenarios, including long process chains based on, for example, several distributed automation islands and mobile laboratory robots for material transport, are difficult to handle in BPMSs. The presented approach deals with the displacement of workflow control tasks into life-science-specialized LESs, the reduction of the numerous different interfaces between BPMSs and subsystems, and the simplification of complex process modeling. Thus, the integration effort for complex laboratory workflows can be significantly reduced for strictly structured automation solutions. An example application, consisting of a mixture of manual and automated subprocesses, is demonstrated by the presented BPMS-LES approach.
NASA Technical Reports Server (NTRS)
Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam;
2009-01-01
The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
Sarter, N B; Woods, D D
1997-12-01
Research and operational experience have shown that one of the major problems with pilot-automation interaction is a lack of mode awareness (i.e., awareness of the current and future status and behavior of the automation). As a result, pilots sometimes experience so-called automation surprises when the automation takes an unexpected action or fails to behave as anticipated. A lack of mode awareness and automation surprises can be viewed as symptoms of a mismatch between human and machine properties and capabilities. Changes in automation design can therefore be expected to affect the likelihood and nature of problems encountered by pilots. Previous studies have focused exclusively on early-generation "glass cockpit" aircraft that were designed based on a similar automation philosophy. To find out whether similar difficulties with maintaining mode awareness are encountered on more advanced aircraft, a corpus of automation surprises was gathered from pilots of the Airbus A-320, an aircraft characterized by high levels of autonomy, authority, and complexity. To understand the underlying reasons for reported breakdowns in human-automation coordination, we also asked pilots about their monitoring strategies and their experiences with and attitude toward the unique design of flight controls on this aircraft.
NASA Technical Reports Server (NTRS)
Sarter, N. B.; Woods, D. D.
1997-01-01
Research and operational experience have shown that one of the major problems with pilot-automation interaction is a lack of mode awareness (i.e., awareness of the current and future status and behavior of the automation). As a result, pilots sometimes experience so-called automation surprises when the automation takes an unexpected action or fails to behave as anticipated. A lack of mode awareness and automation surprises can be viewed as symptoms of a mismatch between human and machine properties and capabilities. Changes in automation design can therefore be expected to affect the likelihood and nature of problems encountered by pilots. Previous studies have focused exclusively on early-generation "glass cockpit" aircraft that were designed based on a similar automation philosophy. To find out whether similar difficulties with maintaining mode awareness are encountered on more advanced aircraft, a corpus of automation surprises was gathered from pilots of the Airbus A-320, an aircraft characterized by high levels of autonomy, authority, and complexity. To understand the underlying reasons for reported breakdowns in human-automation coordination, we also asked pilots about their monitoring strategies and their experiences with and attitude toward the unique design of flight controls on this aircraft.
Automated fiber placement: Evolution and current demonstrations
NASA Technical Reports Server (NTRS)
Grant, Carroll G.; Benson, Vernon M.
1993-01-01
The automated fiber placement process has been in development at Hercules since 1980. Fiber placement is being developed specifically for aircraft and other high performance structural applications. Several major milestones have been achieved during process development. These milestones are discussed in this paper. The automated fiber placement process is currently being demonstrated on the NASA ACT program. All demonstration projects to date have focused on fiber placement of transport aircraft fuselage structures. Hercules has worked closely with Boeing and Douglas on these demonstration projects. This paper gives a description of demonstration projects and results achieved.
A semi-automated technique for labeling and counting of apoptosing retinal cells
2014-01-01
Background Retinal ganglion cell (RGC) loss is one of the earliest and most important cellular changes in glaucoma. The DARC (Detection of Apoptosing Retinal Cells) technology enables in vivo real-time non-invasive imaging of single apoptosing retinal cells in animal models of glaucoma and Alzheimer’s disease. To date, apoptosing RGCs imaged using DARC have been counted manually. This is time-consuming, labour-intensive, vulnerable to bias, and has considerable inter- and intra-operator variability. Results A semi-automated algorithm was developed which enabled automated identification of apoptosing RGCs labeled with fluorescent Annexin-5 on DARC images. Automated analysis included a pre-processing stage involving local-luminance and local-contrast “gain control”, a “blob analysis” step to differentiate between cells, vessels and noise, and a method to exclude non-cell structures using specific combined ‘size’ and ‘aspect’ ratio criteria. Apoptosing retinal cells were counted by 3 masked operators, generating ‘Gold-standard’ mean manual cell counts, and were also counted using the newly developed automated algorithm. Comparison between automated cell counts and the mean manual cell counts on 66 DARC images showed significant correlation between the two methods (Pearson’s correlation coefficient 0.978, p < 0.001; R-squared = 0.956). The intraclass correlation coefficient was 0.986 (95% CI 0.977-0.991, p < 0.001), and Cronbach’s alpha measure of consistency was 0.986, confirming excellent correlation and consistency. No significant difference (p = 0.922, 95% CI: −5.53 to 6.10) was detected between the cell counts of the two methods. Conclusions The novel automated algorithm enabled accurate quantification of apoptosing RGCs that is highly comparable to manual counting, and appears to minimise operator bias, whilst being both fast and reproducible.
This may prove to be a valuable method of quantifying apoptosing retinal cells, with particular relevance to translation in the clinic, where a Phase I clinical trial of DARC in glaucoma patients is due to start shortly. PMID:24902592
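The "blob analysis" with combined size and aspect-ratio criteria described above can be sketched as follows. The connected-component labeling and the numeric thresholds are illustrative stand-ins, not the published algorithm:

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labeling of a boolean mask
    (tiny BFS; stand-in for a 'blob analysis' step)."""
    labels = np.zeros(mask.shape, dtype=int)
    nxt = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        nxt += 1
        labels[i, j] = nxt
        q = deque([(i, j)])
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = nxt
                    q.append((rr, cc))
    return labels, nxt

def filter_cells(mask, min_size=4, max_size=200, max_aspect=2.0):
    """Keep blobs whose pixel count and bounding-box aspect ratio fall
    inside cell-like limits, rejecting elongated vessel-like blobs and
    tiny noise specks. Thresholds are hypothetical."""
    labels, n = label_blobs(mask)
    kept = []
    for k in range(1, n + 1):
        rows, cols = np.nonzero(labels == k)
        size = rows.size
        h, w = np.ptp(rows) + 1, np.ptp(cols) + 1
        aspect = max(h, w) / min(h, w)
        if min_size <= size <= max_size and aspect <= max_aspect:
            kept.append(k)
    return kept

mask = np.zeros((20, 20), dtype=bool)
mask[2:6, 2:6] = True    # compact cell-like blob: kept
mask[10, 1:19] = True    # elongated vessel-like line: rejected (aspect)
mask[15, 15] = True      # single-pixel noise: rejected (size)
print(len(filter_cells(mask)))  # 1
```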
Implementation of and experiences with new automation
Mahmud, Ifte; Kim, David
2000-01-01
In an environment where cost, timeliness, and quality drive the business, it is essential to look for answers in technology where these challenges can be met. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However, the workload remained constant or in some cases actually increased. So even with the reduction in laboratory personnel, we were challenged internally and from the headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies indicated the Suffern site to be the choice manufacturing site above other facilities. This is attributed to the Suffern facility employees' commitment to reduce cycle time, improve efficiency, and maintain a high level of regulatory compliance. One of the strongest contributing factors was automation technology in the laboratories, and this technology will continue to help the site's status in the future. The Automation Group was originally formed about 2 years ago to meet the demands of high-throughput quality assurance testing and to bring our testing group up to standard with the industry. Automation began with only two people in the group, and now we have three people who are the next generation of automation scientists. Even with such a small staff, we have made great strides in laboratory automation as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second-generation automation group came from the laboratory and had little automation experience.
However, with the involvement of the users from the ‘get-go’, we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, then Zymark TPWII, followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore the possibilities of improving the testing methodologies so that the chemists will be less burdened with repetitive and mundane daily tasks and can focus more on bringing quality into our products. PMID:18924695
Implementation of and experiences with new automation.
Mahmud, I; Kim, D
2000-01-01
In an environment where cost, timeliness, and quality drive the business, it is essential to look for answers in technology where these challenges can be met. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However, the workload remained constant or in some cases actually increased. So even with the reduction in laboratory personnel, we were challenged internally and from the headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies indicated the Suffern site to be the choice manufacturing site above other facilities. This is attributed to the Suffern facility employees' commitment to reduce cycle time, improve efficiency, and maintain a high level of regulatory compliance. One of the strongest contributing factors was automation technology in the laboratories, and this technology will continue to help the site's status in the future. The Automation Group was originally formed about 2 years ago to meet the demands of high-throughput quality assurance testing and to bring our testing group up to standard with the industry. Automation began with only two people in the group, and now we have three people who are the next generation of automation scientists. Even with such a small staff, we have made great strides in laboratory automation as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second-generation automation group came from the laboratory and had little automation experience.
However, with the involvement of the users from the 'get-go', we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, then Zymark TPWII, followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore the possibilities of improving the testing methodologies so that the chemists will be less burdened with repetitive and mundane daily tasks and can focus more on bringing quality into our products.
Method and apparatus for automated, modular, biomass power generation
Diebold, James P; Lilley, Arthur; Browne, III, Kingsbury; Walt, Robb Ray; Duncan, Dustin; Walker, Michael; Steele, John; Fields, Michael; Smith, Trevor
2013-11-05
Method and apparatus for generating a low tar, renewable fuel gas from biomass and using it in other energy conversion devices, many of which were designed for use with gaseous and liquid fossil fuels. An automated, downdraft gasifier incorporates extensive air injection into the char bed to maintain the conditions that promote the destruction of residual tars. The resulting fuel gas and entrained char and ash are cooled in a special heat exchanger, and then continuously cleaned in a filter prior to usage in standalone as well as networked power systems.
Method and apparatus for automated, modular, biomass power generation
Diebold, James P [Lakewood, CO]; Lilley, Arthur [Finleyville, PA]; Browne, Kingsbury III [Golden, CO]; Walt, Robb Ray [Aurora, CO]; Duncan, Dustin [Littleton, CO]; Walker, Michael [Longmont, CO]; Steele, John [Aurora, CO]; Fields, Michael [Arvada, CO]; Smith, Trevor [Lakewood, CO]
2011-03-22
Method and apparatus for generating a low tar, renewable fuel gas from biomass and using it in other energy conversion devices, many of which were designed for use with gaseous and liquid fossil fuels. An automated, downdraft gasifier incorporates extensive air injection into the char bed to maintain the conditions that promote the destruction of residual tars. The resulting fuel gas and entrained char and ash are cooled in a special heat exchanger, and then continuously cleaned in a filter prior to usage in standalone as well as networked power systems.
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
Automated event generation for loop-induced processes
Hirschi, Valentin; Mattelaer, Olivier
2015-10-22
We present the first fully automated implementation of cross-section computation and event generation for loop-induced processes. This work is integrated in the MadGraph5_aMC@NLO framework. We describe the optimisations implemented at the level of the matrix element evaluation, phase space integration and event generation allowing for the simulation of large multiplicity loop-induced processes. Along with some selected differential observables, we illustrate our results with a table showing inclusive cross-sections for all loop-induced hadronic scattering processes with up to three final states in the SM as well as for some relevant 2 → 4 processes. Furthermore, many of these are computed here for the first time.
Automated synthetic scene generation
NASA Astrophysics Data System (ADS)
Givens, Ryan N.
Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.
T-RMSD: a web server for automated fine-grained protein structural classification.
Magis, Cedrik; Di Tommaso, Paolo; Notredame, Cedric
2013-07-01
This article introduces the T-RMSD web server (tree-based on root-mean-square deviation), a service allowing the online computation of structure-based protein classification. It has been developed to address the relation between structural and functional similarity in proteins, and it allows a fine-grained structural clustering of a given protein family or group of structurally related proteins using distance RMSD (dRMSD) variations. These distances are computed between all pairs of equivalent residues, as defined by the ungapped columns within a given multiple sequence alignment. Using these generated distance matrices (one per equivalent position), T-RMSD produces a structural tree with support values for each cluster node, reminiscent of bootstrap values. These values, associated with the tree topology, allow a quantitative estimate of structural distances between proteins or group of proteins defined by the tree topology. The clusters thus defined have been shown to be structurally and functionally informative. The T-RMSD web server is a free website open to all users and available at http://tcoffee.crg.cat/apps/tcoffee/do:trmsd.
T-RMSD: a web server for automated fine-grained protein structural classification
Magis, Cedrik; Di Tommaso, Paolo; Notredame, Cedric
2013-01-01
This article introduces the T-RMSD web server (tree-based on root-mean-square deviation), a service allowing the online computation of structure-based protein classification. It has been developed to address the relation between structural and functional similarity in proteins, and it allows a fine-grained structural clustering of a given protein family or group of structurally related proteins using distance RMSD (dRMSD) variations. These distances are computed between all pairs of equivalent residues, as defined by the ungapped columns within a given multiple sequence alignment. Using these generated distance matrices (one per equivalent position), T-RMSD produces a structural tree with support values for each cluster node, reminiscent of bootstrap values. These values, associated with the tree topology, allow a quantitative estimate of structural distances between proteins or group of proteins defined by the tree topology. The clusters thus defined have been shown to be structurally and functionally informative. The T-RMSD web server is a free website open to all users and available at http://tcoffee.crg.cat/apps/tcoffee/do:trmsd. PMID:23716642
NASA Technical Reports Server (NTRS)
Stuart, Jeffrey; Howell, Kathleen; Wilson, Roby
2013-01-01
The Sun-Jupiter Trojan asteroids are celestial bodies of great scientific interest as well as potential resources offering water and other mineral resources for longterm human exploration of the solar system. Previous investigations under this project have addressed the automated design of tours within the asteroid swarm. This investigation expands the current automation scheme by incorporating options for a complete trajectory design approach to the Trojan asteroids. Computational aspects of the design procedure are automated such that end-to-end trajectories are generated with a minimum of human interaction after key elements and constraints associated with a proposed mission concept are specified.
Fully automated processing of fMRI data in SPM: from MRI scanner to PACS.
Maldjian, Joseph A; Baer, Aaron H; Kraft, Robert A; Laurienti, Paul J; Burdette, Jonathan H
2009-01-01
Here we describe the Wake Forest University Pipeline, a fully automated method for the processing of fMRI data using SPM. The method includes fully automated data transfer and archiving from the point of acquisition, real-time batch script generation, distributed grid processing, interface to SPM in MATLAB, error recovery and data provenance, DICOM conversion and PACS insertion. It has been used for automated processing of fMRI experiments, as well as for the clinical implementation of fMRI and spin-tag perfusion imaging. The pipeline requires no manual intervention, and can be extended to any studies requiring offline processing.
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
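The R factor quoted above is the standard crystallographic agreement measure between observed and calculated structure-factor amplitudes; a minimal sketch with made-up amplitudes:

```python
import numpy as np

def r_factor(f_obs, f_calc):
    """Crystallographic R factor: sum |Fobs - Fcalc| / sum Fobs over
    reflection amplitudes (the quantity reported as R = 0.24 above).
    The free R factor is the same formula over a held-out test set
    of reflections excluded from refinement."""
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

# Hypothetical amplitudes, for illustration only.
f_obs = np.array([100.0, 80.0, 60.0])
f_calc = np.array([90.0, 85.0, 55.0])
print(round(r_factor(f_obs, f_calc), 3))  # 0.083
```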
Measuring Up: Implementing a Dental Quality Measure in the Electronic Health Record Context
Bhardwaj, Aarti; Ramoni, Rachel; Kalenderian, Elsbeth; Neumann, Ana; Hebballi, Nutan B; White, Joel M; McClellan, Lyle; Walji, Muhammad F
2015-01-01
Background Quality improvement requires quality measures that are validly implementable. In this work, we assessed the feasibility and performance of an automated electronic Meaningful Use dental clinical quality measure (percentage of children who received fluoride varnish). Methods We defined how to implement the automated measure queries in a dental electronic health record (EHR). Within records identified through automated query, we manually reviewed a subsample to assess the performance of the query. Results The automated query found 71.0% of patients to have had fluoride varnish compared to 77.6% found using the manual chart review. The automated quality measure performance was 90.5% sensitivity, 90.8% specificity, 96.9% positive predictive value, and 75.2% negative predictive value. Conclusions Our findings support the feasibility of automated dental quality measure queries in the context of sufficient structured data. Information noted only in the free text rather than in structured data would require natural language processing approaches to effectively query. Practical Implications To participate in self-directed quality improvement, dental clinicians must embrace the accountability era. Commitment to quality will require enhanced documentation in order to support near-term automated calculation of quality measures. PMID:26562736
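The performance figures quoted above are standard confusion-matrix statistics comparing the automated query against manual chart review; a minimal sketch with hypothetical counts (not the study's data):

```python
def query_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV of an automated quality
    measure query judged against a manual-review gold standard."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives found
        "specificity": tn / (tn + fp),  # true negatives found
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts, for illustration only.
m = query_performance(tp=90, fp=10, fn=10, tn=90)
print(m["sensitivity"], m["ppv"])  # 0.9 0.9
```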
An ODE-Based Wall Model for Turbulent Flow Simulations
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Aftosmis, Michael J.
2017-01-01
Fully automated meshing for Reynolds-averaged Navier-Stokes (RANS) simulations: mesh generation for complex geometry continues to be the biggest bottleneck in the RANS simulation process. Fully automated Cartesian methods are routinely used for inviscid simulations about arbitrarily complex geometry, but they lack an obvious and robust way to achieve near-wall anisotropy. The goal is to extend these methods to RANS simulation without sacrificing automation, at an affordable cost. Nothing here is limited to Cartesian methods, and much becomes simpler in a body-fitted setting.
Development of design principles for automated systems in transport control.
Balfe, Nora; Wilson, John R; Sharples, Sarah; Clarke, Theresa
2012-01-01
This article reports the results of a qualitative study investigating attitudes towards and opinions of an advanced automation system currently used in UK rail signalling. In-depth interviews were held with 10 users, key issues associated with automation were identified and the automation's impact on the signalling task investigated. The interview data highlighted the importance of the signallers' understanding of the automation and their (in)ability to predict its outputs. The interviews also covered the methods used by signallers to interact with and control the automation, and the perceived effects on their workload. The results indicate that despite a generally low level of understanding and ability to predict the actions of the automation system, signallers have developed largely successful coping mechanisms that enable them to use the technology effectively. These findings, along with parallel work identifying desirable attributes of automation from the literature in the area, were used to develop 12 principles of automation which can be used to help design new systems which better facilitate cooperative working. The work reported in this article was completed with the active involvement of operational rail staff who regularly use automated systems in rail signalling. The outcomes are currently being used to inform decisions on the extent and type of automation and user interfaces in future generations of rail control systems.
NASA Astrophysics Data System (ADS)
McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.
2017-08-01
Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate comparable dose distributions to clinical. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organs-at-risk criteria levels evaluated compared with clinical. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical across the 12 patients tested, and automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose in one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction.
It is a promising approach for fully automated treatment planning and can be readily applied to different treatment sites and modalities.
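The atlas-selection and dose-averaging idea can be sketched in miniature. The toy code below is illustrative only: the feature vectors, the Euclidean similarity measure, and the inverse-distance weighting are assumptions for the sketch, not the authors' actual pipeline.

```python
import math

def select_atlases(novel_features, atlas_db, k=3):
    """Rank atlas patients by Euclidean feature distance; keep the k closest."""
    ranked = sorted(atlas_db, key=lambda a: math.dist(novel_features, a["features"]))
    return ranked[:k]

def predict_voxel_dose(novel_features, atlas_db, k=3):
    """Predict per-voxel dose as a similarity-weighted average over the
    selected atlases (inverse-distance weights, an illustrative choice)."""
    atlases = select_atlases(novel_features, atlas_db, k)
    weights = [1.0 / (1e-9 + math.dist(novel_features, a["features"]))
               for a in atlases]
    total = sum(weights)
    n_vox = len(atlases[0]["dose"])
    return [sum(w * a["dose"][v] for w, a in zip(weights, atlases)) / total
            for v in range(n_vox)]

# Toy database: two similar patients and one dissimilar outlier.
atlas_db = [
    {"features": [1.0, 2.0], "dose": [60.0, 20.0]},
    {"features": [1.1, 2.1], "dose": [62.0, 18.0]},
    {"features": [9.0, 9.0], "dose": [40.0, 40.0]},
]
dose = predict_voxel_dose([1.05, 2.05], atlas_db, k=2)
```

With k=2 the outlier is excluded, so the predicted dose lies between the two similar atlases' doses at each voxel; a real system would replace the feature vectors with image-derived descriptors and feed the predicted distribution into dose-mimicking optimization.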
Using Automated Scores of Student Essays to Support Teacher Guidance in Classroom Inquiry
NASA Astrophysics Data System (ADS)
Gerard, Libby F.; Linn, Marcia C.
2016-02-01
Computer scoring of student written essays about an inquiry topic can be used to diagnose student progress both to alert teachers to struggling students and to generate automated guidance. We identify promising ways for teachers to add value to automated guidance to improve student learning. Three teachers from two schools and their 386 students participated. We draw on evidence from student progress, observations of how teachers interact with students, and reactions of teachers. The findings suggest that alerts for teachers prompted rich teacher-student conversations about energy in photosynthesis. In one school, the combination of the automated guidance plus teacher guidance was more effective for student science learning than two rounds of personalized, automated guidance. In the other school, both approaches resulted in equal learning gains. These findings suggest optimal combinations of automated guidance and teacher guidance to support students to revise explanations during inquiry and build integrated understanding of science.
Development of an automated fuzing station for the future armored resupply vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chesser, J.B.; Jansen, J.F.; Lloyd, P.D.
1995-03-01
The US Army is developing the Advanced Field Artillery System (AFAS), a next-generation armored howitzer. The Future Armored Resupply Vehicle (FARV) will be its companion ammunition resupply vehicle. The FARV will automate the supply of ammunition and fuel to the AFAS, which will increase capabilities over the current system. One of the functions being considered for automation is ammunition processing. Oak Ridge National Laboratory is developing equipment to demonstrate automated ammunition processing. One of the key operations to be automated is fuzing. The projectiles are initially unfuzed, and a fuze must be inserted and threaded into the projectile as part of the processing. A constraint on the design solution is that the ammunition cannot be modified to simplify automation. The problem was analyzed to determine the alignment requirements. Using the results of the analysis, ORNL designed, built, and tested a test stand to verify the selected design solution.
Polonchuk, Liudmila
2014-01-01
Patch-clamping is a powerful technique for investigating ion channel function and regulation. However, its low throughput has hampered the profiling of large compound series in early drug development. Fortunately, automation has revolutionized the area of experimental electrophysiology over the past decade. Whereas the first automated patch-clamp instruments using planar patch-clamp technology demonstrated a rather moderate throughput, a few second-generation automated platforms recently launched by various companies have significantly increased the ability to form a high number of high-resistance seals. Among them is SyncroPatch(®) 96 (Nanion Technologies GmbH, Munich, Germany), a fully automated giga-seal patch-clamp system with the highest throughput on the market. By recording from up to 96 cells simultaneously, the SyncroPatch(®) 96 substantially increases throughput without compromising data quality. This chapter describes features of this innovative automated electrophysiology system and the protocols used for a successful transfer of the established hERG assay to this high-throughput automated platform.
A Santos, Jose C; Nassif, Houssam; Page, David; Muggleton, Stephen H; E Sternberg, Michael J
2012-07-11
There is a need for automated methods to learn general features of the interactions of a ligand class with its diverse set of protein receptors. An appropriate machine learning approach is Inductive Logic Programming (ILP), which automatically generates comprehensible rules in addition to prediction. The development of ILP systems which can learn rules of the complexity required for studies on protein structure remains a challenge. In this work we use a new ILP system, ProGolem, and demonstrate its performance on learning features of hexose-protein interactions. The rules induced by ProGolem detect interactions mediated by aromatics and by planar-polar residues, in addition to less common features such as the aromatic sandwich. The rules also reveal a previously unreported dependency for residues cys and leu. They also specify interactions involving aromatic and hydrogen bonding residues. This paper shows that Inductive Logic Programming implemented in ProGolem can derive rules giving structural features of protein/ligand interactions. Several of these rules are consistent with descriptions in the literature. In addition to confirming literature results, ProGolem's model has a 10-fold cross-validated predictive accuracy that is superior, at the 95% confidence level, to another ILP system previously used to study protein/hexose interactions and is comparable with state-of-the-art statistical learners.
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.
2012-03-01
Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at a competitively faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. This efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
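The Hessian-eigenvalue step behind the vessel probability map can be illustrated on a tiny synthetic image. The sketch below uses plain finite differences with no Gaussian scale-space, and the crude "vesselness" measure is an assumption for illustration, not the paper's exact formula; it only shows why the eigenvalues of the second-derivative matrix respond to tubular (line-like) structures.

```python
import math

def hessian_eigenvalues(img, y, x):
    """Finite-difference 2x2 Hessian at (y, x) and its eigenvalues (closed form)."""
    ixx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    iyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    ixy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    tr = ixx + iyy
    root = math.sqrt((ixx - iyy) ** 2 + 4 * ixy ** 2)
    return (tr - root) / 2.0, (tr + root) / 2.0

def vessel_response(img, y, x):
    """Crude tubularity measure: strong positive curvature across a dark ridge,
    little curvature along it (dark vessels on a bright background assumed)."""
    lo, hi = hessian_eigenvalues(img, y, x)
    return hi - abs(lo)

# Bright 7x7 background with one dark vertical "vessel" at column 3.
W, H = 7, 7
img = [[0.0 if x == 3 else 1.0 for x in range(W)] for y in range(H)]
```

On this image the response is large on the dark column and zero on the flat background, which is the behavior the multi-scale eigenvalue filter exploits before entropy thresholding.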
Mühlebach, Anneke; Adam, Joachim; Schön, Uwe
2011-11-01
Automated medicinal chemistry (parallel chemistry) has become an integral part of the drug-discovery process in almost every large pharmaceutical company. Parallel array synthesis of individual organic compounds has been used extensively to generate diverse structural libraries to support different phases of the drug-discovery process, such as hit-to-lead, lead finding, or lead optimization. In order to guarantee effective project support, efficiency in the production of compound libraries has been maximized. As a consequence, throughput in chromatographic purification and analysis has also been adapted. As a recent trend, more laboratories are preparing smaller, yet more focused libraries with ever increasing demands on quality, i.e. optimal purity and unambiguous confirmation of identity. This paper presents an automated approach for combining effective purification and structural confirmation of a lead optimization library created by microwave-assisted organic synthesis. The results of complementary analytical techniques such as UHPLC-HRMS and NMR are not merely considered side by side but merged for fast and easy decision making, providing optimal quality of the compound stock. In comparison with the previous procedures, throughput times are at least four times faster, while compound consumption could be decreased more than threefold. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automated first-principles mapping for phase-change materials.
Esser, Marc; Maintz, Stefan; Dronskowski, Richard
2017-04-05
Plotting materials on bi-coordinate maps according to physically meaningful descriptors has a successful tradition in computational solid-state science spanning more than four decades. Equipped with new ab initio techniques introduced in this work, we generate an improved version of the treasure map for phase-change materials (PCMs) introduced previously by Lencer et al. which, other than before, charts all industrially used PCMs correctly. Furthermore, we suggest seven new PCM candidates, namely SiSb4Te7, Si2Sb2Te5, SiAs2Te4, PbAs2Te4, SiSb2Te4, Sn2As2Te5, and PbAs4Te7, to be used as synthetic targets. To realize the aforementioned maps based on orbital mixing (or "hybridization") and ionicity coordinates, structural information was first included into an ab initio numerical descriptor for sp3 orbital mixing and then generalized beyond high-symmetry structures. In addition, a simple, yet powerful quantum-mechanical ionization measure also including structural information was introduced. Taken together, these tools allow for (automatically) generating materials maps relying solely on first-principles calculations. © 2017 Wiley Periodicals, Inc.
Automated data collection based on RoboDiff at the ESRF beamline MASSIF-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurizzo, Didier, E-mail: Didier.nurizzo@esrf.fr; Guichard, Nicolas; McSweeney, Sean
2016-07-27
The European Synchrotron Radiation Facility has a long-standing history in the automation of experiments in Macromolecular Crystallography. MASSIF-1 (Massively Automated Sample Screening and evaluation Integrated Facility), a beamline constructed as part of the ESRF Upgrade Phase I program, has been open to the external user community since July 2014 and offers a unique, completely automated data collection service to both academic and industrial structural biologists.
SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.
Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga
2013-01-01
High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
NASA Astrophysics Data System (ADS)
Tajik, Jehangir K.; Kugelmass, Steven D.; Hoffman, Eric A.
1993-07-01
We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure-to-function relationships. A thick-slice, high temporal resolution mode is used to follow a bolus contrast agent for blood flow evaluation and is fused with a high spatial resolution, thin-slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color-coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high resolution volume image. A text file containing these values along with a voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiologic based research findings to demonstrate the strengths of combining dynamic CT and HRCT relative to other scanning modalities to uniquely characterize normal pulmonary physiology and pathophysiology.
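The mean transit time quantity mentioned above can be illustrated with a minimal first-moment calculation on a baseline-subtracted time-density curve. This is the standard indicator-dilution definition, assumed here for illustration rather than taken from the paper's specific flow model.

```python
def mean_transit_time(times, concentration):
    """First moment of a time-density curve sampled at uniform intervals:
    MTT = sum(t * c(t)) / sum(c(t)).  Assumes baseline already subtracted."""
    area = sum(concentration)
    if area == 0:
        raise ValueError("curve has zero area")
    return sum(t * c for t, c in zip(times, concentration)) / area

# Symmetric toy bolus curve peaking at t = 3 s.
mtt = mean_transit_time(list(range(7)), [0, 1, 2, 3, 2, 1, 0])
```

For the symmetric toy curve the first moment falls at the peak, t = 3 s; by the central volume principle, regional blood flow could then be estimated as regional blood volume divided by this MTT.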
NASA Astrophysics Data System (ADS)
Chisholm, Bret J.; Webster, Dean C.; Bennett, James C.; Berry, Missy; Christianson, David; Kim, Jongsoo; Mayo, Bret; Gubbins, Nathan
2007-07-01
An automated, high-throughput adhesion workflow that enables pseudobarnacle adhesion and coating/substrate adhesion to be measured on coating patches arranged in an array format on 4×8in.2 panels was developed. The adhesion workflow consists of the following process steps: (1) application of an adhesive to the coating array; (2) insertion of panels into a clamping device; (3) insertion of aluminum studs into the clamping device and onto coating surfaces, aligned with the adhesive; (4) curing of the adhesive; and (5) automated removal of the aluminum studs. Validation experiments comparing data generated using the automated, high-throughput workflow to data obtained using conventional, manual methods showed that the automated system allows for accurate ranking of relative coating adhesion performance.
An Empirical Evaluation of Automated Theorem Provers in Software Certification
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd; Schumann, Johann
2004-01-01
We describe a system for the automated certification of safety properties of NASA software. The system uses Hoare-style program verification technology to generate proof obligations which are then processed by an automated first-order theorem prover (ATP). We discuss the unique requirements this application places on the ATPs, focusing on automation, proof checking, and usability. For full automation, however, the obligations must be aggressively preprocessed and simplified, and we demonstrate how the individual simplification stages, which are implemented by rewriting, influence the ability of the ATPs to solve the proof tasks. Our results are based on 13 certification experiments that lead to more than 25,000 proof tasks which have each been attempted by Vampire, Spass, e-setheo, and Otter. The proofs found by Otter have been proof-checked by IVY.
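The simplification-by-rewriting stage described above can be sketched with a toy bottom-up simplifier for boolean proof obligations. The tuple term representation and the rule set are invented for illustration; they are not the certification system's actual rewrite rules.

```python
def simplify(t):
    """Bottom-up simplification of boolean terms given as nested tuples,
    e.g. ('and', ('or', 'p', 'false'), 'true').  Illustrative rules only:
    unit/zero laws and idempotence for 'and' and 'or'."""
    if isinstance(t, str):          # atom or constant
        return t
    op, a, b = t[0], simplify(t[1]), simplify(t[2])
    if op == "and":
        if a == "true": return b
        if b == "true": return a
        if "false" in (a, b): return "false"
        if a == b: return a
    if op == "or":
        if a == "false": return b
        if b == "false": return a
        if "true" in (a, b): return "true"
        if a == b: return a
    return (op, a, b)               # no rule applied; keep the node
```

A pipeline like the one in the abstract would run many such passes (plus deeper logical rules) so that trivial obligations never reach the first-order prover at all.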
Automated Semantic Indices Related to Cognitive Function and Rate of Cognitive Decline
ERIC Educational Resources Information Center
Pakhomov, Serguei V. S.; Hemmy, Laura S.; Lim, Kelvin O.
2012-01-01
The objective of our study is to introduce a fully automated, computational linguistic technique to quantify semantic relations between words generated on a standard semantic verbal fluency test and to determine its cognitive and clinical correlates. Cognitive differences between patients with Alzheimer's disease and mild cognitive impairment are…
Use of Automated Scoring Features to Generate Hypotheses Regarding Language-Based DIF
ERIC Educational Resources Information Center
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent
2017-01-01
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Automated Formative Assessment as a Tool to Scaffold Student Documentary Writing
ERIC Educational Resources Information Center
Ferster, Bill; Hammond, Thomas C.; Alexander, R. Curby; Lyman, Hunt
2012-01-01
The hurried pace of the modern classroom does not permit formative feedback on writing assignments at the frequency or quality recommended by the research literature. One solution for increasing individual feedback to students is to incorporate some form of computer-generated assessment. This study explores the use of automated assessment of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
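The Latin Hypercube Sampling step used by "Automated RSM" can be sketched with a generic stdlib-only routine: each parameter's range is split into n equal strata, and each stratum is used exactly once per parameter. The parameter bounds below are illustrative, not the tool suite's actual inputs.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Generate n samples over len(bounds) dimensions.  For each dimension,
    a random permutation assigns one sample to each of n equal strata, with
    a uniform random position inside its stratum."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        perm = list(range(n))
        rng.shuffle(perm)
        width = (hi - lo) / n
        cols.append([lo + (p + rng.random()) * width for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

# Toy 2-D parameter space, e.g. (normalized speed, surface temperature).
samples = latin_hypercube(10, [(0.0, 1.0), (0.0, 100.0)])
```

The stratification guarantees good one-dimensional coverage with far fewer ensemble members than plain random sampling, which is why LHS is a common front end for fitting a Gaussian process surrogate.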
Virtual reality for intelligent and interactive operating, training, and visualization systems
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Schluse, Michael
2000-10-01
Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds, which the user can not only look at but also actively interact with using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the Virtual Reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which then perform the actions necessary to execute the user's task in reality. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The Virtual Reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL, and GETEX, the realization of a comprehensive development tool for the International Space Station, and, last but not least, the realistic simulation of fire extinguishing, forest machines, and excavators, which will be presented in the final paper in addition to the key ideas of this Virtual Reality system.
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
NASA Astrophysics Data System (ADS)
Kelly, Jamie S.; Bowman, Hiroshi C.; Rao, Vittal S.; Pottinger, Hardy J.
1997-06-01
Implementation issues represent an unfamiliar challenge to most control engineers, and many techniques for controller design ignore these issues outright. Consequently, the design of controllers for smart structural systems usually proceeds without regard for their eventual implementation, thus resulting either in serious performance degradation or in hardware requirements that squander power, complicate integration, and drive up cost. The level of integration assumed by the Smart Patch further exacerbates these difficulties, and any design inefficiency may render the realization of a single-package sensor-controller-actuator system infeasible. The goal of this research is to automate the controller implementation process and to relieve the design engineer of implementation concerns like quantization, computational efficiency, and device selection. We specifically target Field Programmable Gate Arrays (FPGAs) as our hardware platform because these devices are highly flexible, power efficient, and reprogrammable. The current study develops an automated implementation sequence that minimizes hardware requirements while maintaining controller performance. Beginning with a state space representation of the controller, the sequence automatically generates a configuration bitstream for a suitable FPGA implementation. MATLAB functions optimize and simulate the control algorithm before translating it into the VHSIC hardware description language. These functions improve power efficiency and simplify integration in the final implementation by performing a linear transformation that renders the controller computationally friendly. The transformation favors sparse matrices in order to reduce multiply operations and the hardware necessary to support them; simultaneously, the remaining matrix elements take on values that minimize limit cycles and parameter sensitivity. 
The proposed controller design methodology is implemented on a simple cantilever beam test structure using FPGA hardware. The experimental closed loop response is compared with that of an automated FPGA controller implementation. Finally, we explore the integration of FPGA based controllers into a multi-chip module, which we believe represents the next step towards the realization of the Smart Patch.
Laser light-section sensor automating the production of textile-reinforced composites
NASA Astrophysics Data System (ADS)
Schmitt, R.; Niggemann, C.; Mersmann, C.
2009-05-01
Due to their advanced weight-specific mechanical properties, the application of fibre-reinforced plastics (FRP) has been established as a key technology in several engineering areas. Textile-based reinforcement structures (preforms) in particular achieve a high structural integrity due to the multi-dimensional build-up of dry-fibre layers combined with 3D-sewing and further textile processes. The final composite parts provide enhanced damage tolerances through excellent crash-energy absorbing characteristics. For these reasons, structural parts (e.g. frames) will be integrated in next-generation airplanes. However, many manufacturing processes for FRP still involve manual production steps without integrated quality control. The non-automated production implies considerable process dispersion and a high rework rate. Before the final inspection there is no reliable information about the production status. This work sets metrology as the key to automation, and thus to an economically feasible production, applying a laser light-section sensor system (LLSS) to measure process quality and feed the results back to close the control loops of the production system. The developed method derives 3D measurements from height profiles acquired by the LLSS. To assure the textile's quality, a full surface scan is conducted, detecting defects or misalignment by comparing the measurement results with a CAD model of the lay-up. The method focuses on signal processing of the height profiles to ensure sub-pixel accuracy using a novel algorithm based on non-linear least-squares fitting to a set of sigmoid functions. To compare the measured surface points to the CAD model, material characteristics are incorporated into the method. This ensures that only the fibre layer of the textile's surface is included and gaps between the fibres or overlaying seams are neglected. Finally, determining the uncertainty in measurement according to the GUM standard proved the sensor system's accuracy.
First tests under industrial conditions showed that applying this sensor after the draping of each textile layer reduces the scrap quota by approximately 30%.
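The sub-pixel sigmoid-fitting idea can be illustrated in one dimension: recover the edge position in a height profile by fitting the center of a unit-height sigmoid. The paper fits a set of sigmoids by non-linear least squares; this single-parameter grid search is a simplified stand-in, with the width and step values chosen arbitrarily.

```python
import math

def fit_sigmoid_center(profile, width=0.5, step=0.01):
    """Sub-pixel edge location: grid-search the center c of a unit sigmoid
    s(x) = 1 / (1 + exp(-(x - c) / width)) that minimizes the squared error
    against the sampled profile."""
    n = len(profile)
    best_c, best_err = 0.0, float("inf")
    c = 0.0
    while c <= n - 1:
        err = sum((profile[x] - 1.0 / (1.0 + math.exp(-(x - c) / width))) ** 2
                  for x in range(n))
        if err < best_err:
            best_c, best_err = c, err
        c += step
    return best_c

# Synthetic height profile with a true edge at x = 3.37 (between pixels).
profile = [1.0 / (1.0 + math.exp(-(x - 3.37) / 0.5)) for x in range(8)]
```

Even though the profile is sampled only at integer pixels, the fit recovers the edge position to well below one pixel, which is the point of model-based sub-pixel extraction.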
Self-organizing ontology of biochemically relevant small molecules
2012-01-01
Background The advent of high-throughput experimentation in biochemistry has led to the generation of vast amounts of chemical data, necessitating the development of novel analysis, characterization, and cataloguing techniques and tools. Recently, a movement to publicly release such data has advanced biochemical structure-activity relationship research, while presenting new challenges, the biggest being the curation, annotation, and classification of this information to facilitate useful biochemical pattern analysis. Unfortunately, the human resources currently employed by the organizations supporting these efforts (e.g. ChEBI) are expanding linearly, while new useful scientific information is being released in a seemingly exponential fashion. Compounding this, currently existing chemical classification and annotation systems are not amenable to automated classification, formal and transparent chemical class definition axiomatization, facile class redefinition, or novel class integration, thus further limiting chemical ontology growth by necessitating human involvement in curation. Clearly, there is a need for the automation of this process, especially for novel chemical entities of biological interest. Results To address this, we present a formal framework based on Semantic Web technologies for the automatic design of a chemical ontology which can be used for automated classification of novel entities. We demonstrate the automatic self-assembly of a structure-based chemical ontology based on 60 MeSH and 40 ChEBI chemical classes. This ontology is then used to classify 200 compounds with an accuracy of 92.7%. We extend these structure-based classes with molecular feature information and demonstrate the utility of our framework for classification of functionally relevant chemicals. Finally, we discuss an iterative approach that we envision for future biochemical ontology development.
Conclusions We conclude that the proposed methodology can ease the burden of chemical data annotators and dramatically increase their productivity. We anticipate that the use of formal logic in our proposed framework will make chemical classification criteria more transparent to humans and machines alike and will thus facilitate predictive and integrative bioactivity model development. PMID:22221313
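The subsumption-style automated classification described above can be sketched with toy feature-set axioms: a compound belongs to every class whose defining features it exhibits. The class definitions below are invented stand-ins for the formal Semantic Web (e.g. OWL) axioms the framework uses.

```python
# Toy class axioms: class name -> set of structural features required
# for membership (hypothetical, for illustration only).
CLASSES = {
    "hexose": {"ring", "C6", "hydroxyl"},
    "aromatic": {"ring", "conjugated"},
}

def classify(features, classes):
    """Return every class whose required feature set is subsumed by the
    compound's features -- a miniature of description-logic classification."""
    return sorted(name for name, required in classes.items()
                  if required <= features)

result = classify({"ring", "C6", "hydroxyl", "conjugated"}, CLASSES)
```

Because membership is decided by a transparent subset test against explicit axioms, new classes can be added or redefined without re-curating existing compounds, which is the maintainability argument the abstract makes for formal logic.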
Community structure in networks
NASA Astrophysics Data System (ADS)
Newman, Mark
2004-03-01
Many networked systems, including physical, biological, social, and technological networks, appear to contain ``communities'' -- groups of nodes within which connections are dense, but between which they are sparser. The ability to find such communities in an automated fashion could be of considerable use. Communities in a web graph for instance might correspond to sets of web sites dealing with related topics, while communities in a biochemical network or an electronic circuit might correspond to functional units of some kind. We present a number of new methods for community discovery, including methods based on ``betweenness'' measures and methods based on modularity optimization. We also give examples of applications of these methods to both computer-generated and real-world network data, and show how our techniques can be used to shed light on the sometimes dauntingly complex structure of networked systems.
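The modularity quantity underlying the optimization methods mentioned above can be computed directly for a given partition, using the standard definition Q = sum over communities of (fraction of intra-community edges minus the expected fraction under random wiring). The example graph is illustrative.

```python
from collections import Counter

def modularity(edges, community):
    """Newman-Girvan modularity Q = sum_c (e_c/m - (d_c/(2m))^2).
    edges: undirected (u, v) pairs; community: node -> community label."""
    m = len(edges)
    intra = Counter()        # e_c: edges with both endpoints in community c
    degree_sum = Counter()   # d_c: total degree of nodes in community c
    for u, v in edges:
        degree_sum[community[u]] += 1
        degree_sum[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# Two triangles joined by a single bridge edge: an obvious two-community graph.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
trivial = {n: "A" for n in range(6)}
```

For this graph, splitting the two triangles scores Q = 6/7 - 1/2 ≈ 0.357, while lumping everything into one community scores Q = 0; modularity-optimization methods search for the partition maximizing Q.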
Zhou, Hongyi; Skolnick, Jeffrey
2009-01-01
In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed, and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 over the 98 targets, which is better than those of other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783
Automated structure determination of proteins with the SAIL-FLYA NMR method.
Takeda, Mitsuhiro; Ikeya, Teppei; Güntert, Peter; Kainosho, Masatsune
2007-01-01
The labeling of proteins with stable isotopes enhances the NMR method for the determination of 3D protein structures in solution. Stereo-array isotope labeling (SAIL) provides an optimal stereospecific and regiospecific pattern of stable isotopes that yields sharpened lines, spectral simplification without loss of information, and the ability to rapidly collect and fully automatically evaluate the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as those that can be analyzed using conventional methods. Here, we describe a protocol for the preparation of SAIL proteins by cell-free methods, including the preparation of S30 extract, and their automated structure analysis using the FLYA algorithm and the program CYANA. Once efficient cell-free expression of the unlabeled or uniformly labeled target protein has been achieved, the NMR sample preparation of a SAIL protein can be accomplished in 3 d. A fully automated FLYA structure calculation can be completed in 1 d on a powerful computer system.
NASA Astrophysics Data System (ADS)
Panella, F.; Boehm, J.; Loo, Y.; Kaushik, A.; Gonzalez, D.
2018-05-01
This work presents the combination of deep learning (DL) and image processing to produce an automated crack-recognition and defect-measurement tool for civil structures. The authors focus on tunnel structures and surveys, and have developed an end-to-end tool for asset management of underground structures. In order to maintain the serviceability of tunnels, regular inspection is needed to assess their structural status. The traditional survey method is visual inspection: simple, but slow and relatively expensive, and the quality of the output depends on the ability and experience of the engineer as well as on the total workload (stress and tiredness may influence the ability to observe and record information). As a result of these issues, over the last decade there has been a push to automate monitoring using new inspection methods. The present paper has the goal of combining DL with traditional image processing to create a tool able to detect, locate and measure structural defects.
Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E
2012-11-20
The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16 was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources.
PMID:23168231
I-TASSER: fully automated protein structure prediction in CASP8.
Zhang, Yang
2009-01-01
The I-TASSER algorithm for 3D protein structure prediction was tested in CASP8, with the procedure fully automated in both the Server and Human sections. The quality of the server models is close to that of the human ones, but the human predictions incorporate more diverse templates from other servers, which improves the predictions on some of the distant-homology targets. For the first time, the sequence-based contact predictions from machine learning techniques are found helpful for both template-based modeling (TBM) and template-free modeling (FM). In TBM, although the accuracy of the sequence-based contact predictions is on average lower than that of template-based ones, the novel contacts in the sequence-based predictions, which are complementary to the threading templates in the weakly or unaligned regions, are important for improving the global and local packing in these regions. Moreover, the newly developed atomic structural refinement algorithm was tested in CASP8 and found to improve the hydrogen-bonding networks and the overall TM-score, mainly due to its ability to remove steric clashes so that the models can be generated from cluster centroids. Nevertheless, one of the major issues of the I-TASSER pipeline is model selection, where the best models could not be appropriately recognized when the correct templates are detected by only a minority of the threading algorithms. There are also problems related to domain splitting and mirror-image recognition, which mainly influence the performance of I-TASSER modeling in the FM-based structure predictions. Copyright 2009 Wiley-Liss, Inc.
Amith, Muhammad; Cunningham, Rachel; Savas, Lara S; Boom, Julie; Schvaneveldt, Roger; Tao, Cui; Cohen, Trevor
2017-10-01
This study demonstrates the use of distributed vector representations and Pathfinder Network Scaling (PFNETS) to represent online vaccine content created by health experts and by laypeople. By analyzing a target audience's conceptualization of a topic, domain experts can develop targeted interventions to improve the basic health knowledge of consumers. The underlying assumption is that the content created by different groups reflects the mental organization of their knowledge. Applying automated text analysis to this content may elucidate differences between the knowledge structures of laypeople (health consumers) and professionals (health experts). This paper utilizes vaccine information generated by laypeople and health experts to investigate the utility of this approach. We used an established technique from cognitive psychology, Pathfinder Network Scaling, to infer the structure of the associational networks between concepts learned from online content using methods of distributional semantics. In doing so, we extend the original application of PFNETS, inferring knowledge structures from individual participants, to infer the prevailing knowledge structures within communities of content authors. The resulting graphs reveal opportunities for public health and vaccination education experts to improve communication and intervention efforts directed towards health consumers. Our efforts demonstrate the feasibility of using an automated procedure to examine the manifestation of conceptual models within large bodies of free text, revealing evidence of conflicting understanding of vaccine concepts among health consumers as compared with health experts. Additionally, this study provides insight into the differences between consumer and expert abstraction of domain knowledge, revealing vaccine-related knowledge gaps that suggest opportunities to improve provider-patient communication. Copyright © 2017 Elsevier Inc. All rights reserved.
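The Pathfinder pruning step, in its simplest parameterization (r = infinity, q = n - 1), keeps a link only when no indirect path has a smaller maximum link weight. A toy sketch under that assumption (not the authors' implementation):

```python
def pathfinder_edges(dist, eps=1e-12):
    """Prune a symmetric distance matrix to its Pathfinder network (PFNET).

    An edge (i, j) survives only if its direct distance does not exceed the
    minimax distance: the smallest achievable 'worst link' over all paths.
    """
    n = len(dist)
    d = [row[:] for row in dist]
    # Floyd-Warshall variant where a path's cost is its maximum link weight.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] <= d[i][j] + eps]
```

Applied to concept-similarity data, this retains only the strongest associative links, yielding the sparse networks that are then compared between author communities.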
Zeng, G; Murphy, J; Annis, S-L; Wu, X; Wang, Y; McGowan, T; Macpherson, M
2012-07-01
To report a quality control program in prostate radiation therapy at our center that includes a semi-automated planning process to generate high-quality plans and in-house software to track plan quality in the subsequent clinical application. Arc planning in Eclipse v10.0 was performed for both intact-prostate and post-prostatectomy treatments. The planning focuses on DVH requirements and on dose distributions able to tolerate daily setup variations. A modified structure set is used to standardize the optimization, including short rectum and bladder structures in the fields to effectively tighten dose to the target, and a rectum expansion with 1 cm cropped from the PTV to block dose and shape the posterior isodose lines. Structure, plan and optimization templates are used to streamline plan generation. DVH files are exported from Eclipse to quality-tracking software with a GUI written in Matlab that can report the dose-volume data either for an individual patient or over a patient population. For 100 intact-prostate patients treated to 78 Gy, rectal D50, D25, D15 and D5 are 30.1±6.2 Gy, 50.6±7.9 Gy, 65.9±6.0 Gy and 76.6±1.4 Gy respectively, well below the limits of 50 Gy, 65 Gy, 75 Gy and 78 Gy. For prostate bed with a prescription of 66 Gy, rectal D50 is 35.9±6.9 Gy. In both sites, the PTV is covered by 95% of the prescription dose and the hotspots are less than 5%. The semi-automated planning method can efficiently create high-quality plans while the tracking software can monitor the feedback from clinical application. It is a comprehensive and robust quality control program in radiation therapy. © 2012 American Association of Physicists in Medicine.
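Reading a dose metric such as rectal D50 off a cumulative DVH is a simple table lookup; a minimal sketch, assuming the DVH is exported as (dose, fractional volume) points sorted by increasing dose (names and data format are illustrative, not Eclipse's export format):

```python
def dose_at_volume(dvh, volume_fraction):
    """Dose D_x from a cumulative DVH: the smallest tabulated dose at which
    the fractional volume receiving at least that dose drops to x or below.

    dvh: list of (dose_gy, fractional_volume) points, volume non-increasing.
    """
    for dose, volume in dvh:
        if volume <= volume_fraction:
            return dose
    return dvh[-1][0]
```

For example, D50 is dose_at_volume(dvh, 0.5); linear interpolation between adjacent points would refine the estimate for sparsely sampled DVHs.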
Signorini, Marcelo; Costa, Magdalena; Teitelbaum, David; Restovich, Viviana; Brasesco, Hebe; García, Diego; Superno, Valeria; Petroli, Sandra; Bruzzone, Mariana; Arduini, Victor; Vanzini, Mónica; Sucari, Adriana; Suberbie, Germán; Maricel, Turina; Rodríguez, Ricardo; Leotta, Gerardo A
2018-08-01
In Argentina, Shiga toxin-producing Escherichia coli (STEC) serogroups O157, O26, O103, O111, O145 and O121 are considered adulterants in ground beef. In other countries, a zero-tolerance approach to all STEC is implemented for chilled beef. Argentinean abattoirs are interested in implementing effective interventions against STEC on carcasses. Pre-rigor beef carcasses were used to determine whether nine antimicrobial strategies effectively reduced aerobic plate, coliform and E. coli counts and stx and eae gene prevalence. These strategies were: citric acid (2%; automated), acetic acid (2%; manual and automated), lactic acid (LA 2%; manual and automated), LA (3%; automated), electrolytically-generated hypochlorous acid (400 ppm; manual), hot water (82 °C; automated) and INSPEXX (0.2%; automated). Automated application of 2% LA after 30-60-min aeration and of 3% LA at 55 °C were the most effective interventions. Automated application was more effective than manual application. Decontamination of beef carcasses through automated application of lactic acid and hot water would reduce public health risks associated with STEC contamination. Copyright © 2018 Elsevier Ltd. All rights reserved.
A robust automated system elucidates mouse home cage behavioral structure
Goulding, Evan H.; Schenk, A. Katrin; Juneja, Punita; MacKay, Adrienne W.; Wade, Jennifer M.; Tecott, Laurence H.
2008-01-01
Patterns of behavior exhibited by mice in their home cages reflect the function and interaction of numerous behavioral and physiological systems. Detailed assessment of these patterns thus has the potential to provide a powerful tool for understanding basic aspects of behavioral regulation and their perturbation by disease processes. However, the capacity to identify and examine these patterns in terms of their discrete levels of organization across diverse behaviors has been difficult to achieve and automate. Here, we describe an automated approach for the quantitative characterization of fundamental behavioral elements and their patterns in the freely behaving mouse. We demonstrate the utility of this approach by identifying unique features of home cage behavioral structure and changes in distinct levels of behavioral organization in mice with single gene mutations altering energy balance. The robust, automated, reproducible quantification of mouse home cage behavioral structure detailed here should have wide applicability for the study of mammalian physiology, behavior, and disease. PMID:19106295
Development and verification testing of automation and robotics for assembly of space structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1993-01-01
A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level and continued development at an enhanced level is warranted.
On the Relation between Automated Essay Scoring and Modern Views of the Writing Construct
ERIC Educational Resources Information Center
Deane, Paul
2013-01-01
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
Sampling probe for microarray read out using electrospray mass spectrometry
Van Berkel, Gary J.
2004-10-12
An automated electrospray based sampling system and method for analysis obtains samples from surface array spots having analytes. The system includes at least one probe, the probe including an inlet for flowing at least one eluting solvent to respective ones of a plurality of spots and an outlet for directing the analyte away from the spots. An automatic positioning system is provided for translating the probe relative to the spots to permit sampling of any spot. An electrospray ion source having an input fluidically connected to the probe receives the analyte and generates ions from the analyte. The ion source provides the generated ions to a structure for analysis to identify the analyte, preferably being a mass spectrometer. The probe can be a surface contact probe, where the probe forms an enclosing seal along the periphery of the array spot surface.
A smart end-effector for assembly of space truss structures
NASA Technical Reports Server (NTRS)
Doggett, William R.; Rhodes, Marvin D.; Wise, Marion A.; Armistead, Maurice F.
1992-01-01
A unique facility, the Automated Structures Research Laboratory, is being used to investigate robotic assembly of truss structures. A special-purpose end-effector is used to assemble structural elements into an eight meter diameter structure. To expand the capabilities of the facility to include construction of structures with curved surfaces from straight structural elements of different lengths, a new end-effector has been designed and fabricated. This end-effector contains an integrated microprocessor to monitor actuator operations through sensor feedback. This paper provides an overview of the automated assembly tasks required by this end-effector and a description of the new end-effector's hardware and control software.
NASA Astrophysics Data System (ADS)
Girolamo, D.; Girolamo, L.; Yuan, F. G.
2015-03-01
Nondestructive evaluation (NDE) for detection and quantification of damage in composite materials is fundamental in the assessment of the overall structural integrity of modern aerospace systems. Conventional NDE systems have been extensively used to detect the location and size of damages by propagating ultrasonic waves normal to the surface. However, they usually require physical contact with the structure and are time consuming and labor intensive. An automated, contactless laser ultrasonic imaging system for barely visible impact damage (BVID) detection in advanced composite structures has been developed to overcome these limitations. Lamb waves are generated by a Q-switched Nd:YAG laser, raster scanned by a set of galvano-mirrors over the damaged area. The out-of-plane vibrations are measured through a laser Doppler vibrometer (LDV) that is stationary at a point on the corner of the grid. The ultrasonic wave field of the scanned area is reconstructed in polar coordinates and analyzed for high resolution characterization of impact damage in the composite honeycomb panel. Two methodologies are used for ultrasonic wave-field analysis: scattered wave field analysis (SWA) and standing wave energy analysis (SWEA) in the frequency domain. The SWA is employed for processing the wave field and estimating spatially dependent wavenumber values, related to discontinuities in the structural domain. The SWEA algorithm extracts standing waves trapped within damaged areas and, by studying the spectrum of the standing wave field, returns high fidelity damage imaging. While the SWA can be used to locate the impact damage in the honeycomb panel, the SWEA produces damage images in good agreement with X-ray computed tomographic (X-ray CT) scans. The results obtained prove that the laser-based nondestructive system is an effective alternative to overcome limitations of conventional NDE technologies.
Automated global structure extraction for effective local building block processing in XCS.
Butz, Martin V; Pelikan, Martin; Llorà, Xavier; Goldberg, David E
2006-01-01
Learning Classifier Systems (LCSs), such as the accuracy-based XCS, evolve distributed problem solutions represented by a population of rules. During evolution, features are specialized, propagated, and recombined to provide increasingly accurate subsolutions. Recently, it was shown that, as in conventional genetic algorithms (GAs), some problems require efficient processing of subsets of features to find problem solutions efficiently. In such problems, standard variation operators of genetic and evolutionary algorithms used in LCSs suffer from potential disruption of groups of interacting features, resulting in poor performance. This paper introduces efficient crossover operators to XCS by incorporating techniques derived from competent GAs: the extended compact GA (ECGA) and the Bayesian optimization algorithm (BOA). Instead of simple crossover operators such as uniform crossover or one-point crossover, ECGA or BOA-derived mechanisms are used to build a probabilistic model of the global population and to generate offspring classifiers locally using the model. Several offspring generation variations are introduced and evaluated. The results show that it is possible to achieve performance similar to runs with an informed crossover operator that is specifically designed to yield ideal problem-dependent exploration, exploiting provided problem structure information. Thus, we create the first competent LCSs, XCS/ECGA and XCS/BOA, that detect dependency structures online and propagate corresponding lower-level dependency structures effectively without any information about these structures given in advance.
StructRNAfinder: an automated pipeline and web server for RNA families prediction.
Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius
2018-02-17
The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers must use multiple tools, which requires the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family according to the Rfam database but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provide a stand-alone version of StructRNAfinder for use in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me . The main advantage of StructRNAfinder lies in its large-scale processing and in integrating the data obtained from each tool and database employed along the workflow into user-friendly reports, useful for downstream analyses and data exploration.
Automated image segmentation using support vector machines
NASA Astrophysics Data System (ADS)
Powell, Stephanie; Magnotta, Vincent A.; Andreasen, Nancy C.
2007-03-01
Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging. Automated methods to delineate brain structures of interest are required to analyze the large amounts of imaging data being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures including the thalamus (0.88), caudate (0.85) and putamen (0.81). In this work, a priori probability information was generated using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and cerebellar regions using the same input vector information. The SVM architecture was derived from the ANN framework. Training was completed using a radial-basis function kernel with gamma equal to 5.5, and was performed using 15,000 vectors collected from 15 training images in approximately 10 minutes. The resulting support vectors were applied to delineate 10 images not part of the training set. Relative overlap calculated for the subcortical structures was 0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM-based algorithm was similar to the inter-rater reliability between manual raters and can be achieved without rater intervention.
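The per-voxel input vector and the radial-basis kernel with gamma = 5.5 can be sketched as follows; the function and parameter names are illustrative, and in practice these vectors would be fed to a library SVM trainer:

```python
import math

def voxel_feature_vector(prior, r, theta, phi, iris_intensities):
    """Per-voxel input vector: a priori probability, spherical coordinates,
    and an 'iris' of surrounding signal-intensity samples (names illustrative)."""
    return [prior, r, theta, phi] + list(iris_intensities)

def rbf_kernel(x, y, gamma=5.5):
    """Radial-basis-function kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
```

The kernel equals 1 for identical vectors and decays with squared Euclidean distance, so a larger gamma makes the classifier more local to its support vectors.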
Cetinceviz, Yucel; Bayindir, Ramazan
2012-05-01
The network requirements of control systems in industrial applications increase day by day. Internet-based control systems and various fieldbus systems have been designed in order to meet these requirements. This paper describes an Internet based control system with wireless fieldbus communication designed for distributed processes. The system was implemented as an experimental setup in a laboratory. In industrial facilities, the process control layer and the remote connection of the distributed control devices in the lowest levels of the industrial production environment are provided by fieldbus networks. In this paper, an Internet based control system that meets these requirements with a new-generation communication structure, called a wired/wireless hybrid system, has been designed at the field level and carried out to cover all sectors of distributed automation, from process control to distributed input/output (I/O). The system comprises, in hardware, a programmable logic controller (PLC), a communication processor (CP) module, two industrial wireless modules, a distributed I/O module and a Motor Protection Package (MPP), and, in software, the WinCC flexible program used for the SCADA (Supervisory Control and Data Acquisition) screen and the SIMATIC MANAGER package ("STEP7") used for hardware and network configuration as well as for downloading the control program to the PLC. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
TRAP: automated classification, quantification and annotation of tandemly repeated sequences.
Sobreira, Tiago José P; Durham, Alan M; Gruber, Arthur
2006-02-01
TRAP, the Tandem Repeats Analysis Program, is a Perl program that provides a unified set of analyses for the selection, classification, quantification and automated annotation of tandemly repeated sequences. TRAP uses the results of the Tandem Repeats Finder program to perform a global analysis of the satellite content of DNA sequences, permitting researchers to easily assess the tandem repeat content for both individual sequences and whole genomes. The results can be generated in convenient formats such as HTML and comma-separated values. TRAP can also be used to automatically generate annotation data in the format of feature table and GFF files.
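The annotation-export step can be sketched as below; the per-repeat fields (start, end, period, copy number) are illustrative and do not reproduce Tandem Repeats Finder's exact column layout:

```python
def repeats_to_gff(seq_id, repeats):
    """Render tandem-repeat records as GFF lines (9 tab-separated columns).

    repeats: iterable of (start, end, period, copies) tuples, 1-based coords.
    """
    lines = []
    for start, end, period, copies in repeats:
        attributes = "period=%d;copies=%.1f" % (period, copies)
        lines.append("\t".join([seq_id, "TRAP", "tandem_repeat",
                                str(start), str(end), ".", "+", ".",
                                attributes]))
    return "\n".join(lines)
```

The same record list could equally be serialized as comma-separated values or an HTML table, mirroring the multiple output formats the program provides.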
Next generation platforms for high-throughput biodosimetry
Repin, Mikhail; Turner, Helen C.; Garty, Guy; Brenner, David J.
2014-01-01
Here the general concept of the combined use of plates and tubes in racks compatible with the American National Standards Institute/Society for Laboratory Automation and Screening (ANSI/SLAS) microplate formats as next-generation platforms for increasing the throughput of biodosimetry assays is described. These platforms can be used at different stages of biodosimetry assays, starting from blood collection into microtubes organised in standardised racks and ending with the cytogenetic analysis of samples in standardised multiwell and multichannel plates. Robotically friendly platforms can be used for different biodosimetry assays in minimally equipped laboratories and on cost-effective automated universal biotech systems. PMID:24837249
NASA Technical Reports Server (NTRS)
Milner, E. J.; Krosel, S. M.
1977-01-01
Techniques are presented for determining the elements of the A, B, C, and D state variable matrices for systems simulated on an EAI Pacer 100 hybrid computer. An automated procedure systematically generates disturbance data necessary to linearize the simulation model and stores these data on a floppy disk. A separate digital program verifies this data, calculates the elements of the system matrices, and prints these matrices appropriately labeled. The partial derivatives forming the elements of the state variable matrices are approximated by finite difference calculations.
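The perturb-and-difference procedure can be sketched as follows: each state and input is disturbed in turn, and the resulting changes in the derivative and output vectors, divided by the disturbance size, approximate the columns of A, B, C, and D. This is an illustrative modern Python sketch, not the original hybrid-computer implementation; the function names and the central-difference choice are assumptions.

```python
import numpy as np

def linearize(f, g, x0, u0, eps=1e-6):
    """Approximate state-space matrices A, B, C, D at (x0, u0) by
    central finite differences, mirroring the disturb-and-measure
    procedure described above.

    f(x, u) returns dx/dt; g(x, u) returns the output vector y.
    """
    x0, u0 = np.asarray(x0, float), np.asarray(u0, float)

    def jac(fun, z0, other, wrt_first):
        cols = []
        for i in range(len(z0)):
            dz = np.zeros_like(z0)
            dz[i] = eps  # disturb one variable at a time
            if wrt_first:
                hi, lo = fun(z0 + dz, other), fun(z0 - dz, other)
            else:
                hi, lo = fun(other, z0 + dz), fun(other, z0 - dz)
            cols.append((np.atleast_1d(hi) - np.atleast_1d(lo)) / (2.0 * eps))
        return np.column_stack(cols)

    A = jac(f, x0, u0, True)    # partial f / partial x
    B = jac(f, u0, x0, False)   # partial f / partial u
    C = jac(g, x0, u0, True)    # partial g / partial x
    D = jac(g, u0, x0, False)   # partial g / partial u
    return A, B, C, D
```

For a linear system the finite differences recover the matrices exactly (up to floating-point error), which makes the routine easy to verify before applying it to a nonlinear simulation model.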
1992-10-01
Prototyping with Application Generators: Lessons Learned from the Naval Aviation Logistics Command Management Information System Case. This study... management information system to automate manual Naval aviation maintenance tasks (NALCOMIS). With the use of a fourth-generation programming language
Martínez, José Mario; Martínez, Leandro
2003-05-01
Molecular Dynamics is a powerful methodology for the comprehension at the molecular level of many chemical and biochemical systems. The theories and techniques developed for structural and thermodynamic analyses are well established, and many software packages are available. However, designing starting configurations for dynamics can be cumbersome. Easily generated regular lattices can be used when simple liquids or mixtures are studied. However, for complex mixtures, polymer solutions or liquids adsorbed on solids (for example), this approach is inefficient, and it turns out to be very hard to obtain an adequate coordinate file. In this article, the problem of obtaining an adequate initial configuration is treated as a "packing" problem and solved by an optimization procedure. The initial configuration is chosen in such a way that the minimum distance between atoms of different molecules is greater than a fixed tolerance. The optimization uses a well-known algorithm for box-constrained minimization. Applications are given for biomolecule solvation, many-component mixtures, and interfaces. This approach can reduce the work of designing starting configurations from days or weeks to a few minutes or hours, in an automated fashion. Packing optimization is also shown to be a powerful methodology for space search in docking of small ligands to proteins. This is demonstrated by docking of the thyroid hormone to its nuclear receptor. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 819-825, 2003
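The packing constraint can be illustrated with a toy sketch: points standing in for molecules are pushed apart until every pairwise distance exceeds the tolerance. The authors' actual method minimizes a smooth objective over whole rigid molecules with box constraints; this naive pairwise loop is only a conceptual stand-in, and the function name and parameters are assumptions.

```python
import numpy as np

def pack(points, tol, iters=2000, step=0.5):
    """Toy 'packing' step: iteratively move points apart until every
    pairwise distance exceeds `tol`. Illustrative only; the real
    method uses box-constrained smooth minimization over rigid
    molecules, not single points.
    """
    pts = np.array(points, float)
    n = len(pts)
    for _ in range(iters):
        moved = False
        for i in range(n):
            for j in range(i + 1, n):
                d = pts[j] - pts[i]
                dist = np.linalg.norm(d)
                if dist < tol:
                    # push the overlapping pair apart along their axis
                    push = (tol - dist + 1e-9) * d / (dist + 1e-12)
                    pts[i] -= step * push
                    pts[j] += step * push
                    moved = True
        if not moved:
            break  # all minimum-distance constraints satisfied
    return pts
```

Starting from randomly crowded coordinates, the loop terminates once a full sweep finds no violated pair, i.e. once the minimum-distance constraint holds everywhere.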
NASA Astrophysics Data System (ADS)
Brocks, Sebastian; Bendig, Juliane; Bareth, Georg
2016-10-01
Crop surface models (CSMs) representing plant height above ground level are a useful tool for monitoring in-field crop growth variability and enabling precision agriculture applications. A semiautomated system for generating CSMs was implemented. It combines an Android application running on a set of smart cameras for image acquisition and transmission and a set of Python scripts automating the structure-from-motion (SfM) software package Agisoft Photoscan and ArcGIS. Only ground-control-point (GCP) marking was performed manually. This system was set up on a barley field experiment with nine different barley cultivars in the growing period of 2014. Images were acquired three times a day for a period of two months. CSMs were successfully generated for 95 out of 98 acquisitions between May 2 and June 30. The best linear regressions of the CSM-derived plot-wise averaged plant heights against manual plant height measurements taken at four dates resulted in a coefficient of determination R2 of 0.87 and a root-mean-square error (RMSE) of 0.08 m, with Willmott's refined index of model performance dr equaling 0.78. In total, 103 mean plot heights were used in the regression based on the noon acquisition time. The presented system succeeded in monitoring crop height from plot scale to field scale in a semiautomated fashion.
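The validation statistics quoted above (coefficient of determination and RMSE of a linear regression between CSM-derived and manually measured heights) can be reproduced for any pair of height series with a few lines of NumPy. The function name is an assumption; this is not the authors' validation script.

```python
import numpy as np

def validate_heights(csm, manual):
    """Regress manual plant heights on CSM-derived heights and return
    (r_squared, rmse) of the fit, as in the validation described above.
    """
    csm = np.asarray(csm, float)
    manual = np.asarray(manual, float)
    # least-squares linear fit: manual ~ a * csm + b
    a, b = np.polyfit(csm, manual, 1)
    resid = manual - (a * csm + b)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((manual - manual.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid ** 2))
    return r2, rmse
```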
Automated sizing of large structures by mixed optimization methods
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.; Loendorf, D.
1973-01-01
A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure is demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.
NASA Technical Reports Server (NTRS)
1979-01-01
The performance, design and verification requirements for the Space Construction Automated Fabrication Experiment (SCAFE) are defined. The SCAFE program defines, develops, and demonstrates the techniques, processes, and equipment required for the automatic fabrication of structural elements in space and for the assembly of such elements into a large, lightweight structure. The program defines a large structural platform to be constructed in orbit using the space shuttle as a launch vehicle and construction base.
Advanced E-O test capability for Army Next-Generation Automated Test System (NGATS)
NASA Astrophysics Data System (ADS)
Errea, S.; Grigor, J.; King, D. F.; Matis, G.; McHugh, S.; McKechnie, J.; Nehring, B.
2015-05-01
The Future E-O (FEO) program was established to develop a flexible, modular, automated test capability as part of the Next Generation Automatic Test System (NGATS) program to support the test and diagnostic needs of currently fielded U.S. Army electro-optical (E-O) devices, as well as being expandable to address the requirements of future Navy, Marine Corps and Air Force E-O systems. Santa Barbara Infrared (SBIR) has designed, fabricated, and delivered three (3) prototype FEO systems for engineering and logistics evaluation prior to anticipated full-scale production beginning in 2016. In addition to presenting a detailed overview of the FEO system hardware design, features and testing capabilities, the integration of SBIR's EO-IR sensor and laser test software package, IRWindows 4™, into FEO to automate the test execution, data collection and analysis, archiving and reporting of results is also described.
Automation of NLO processes and decays and POWHEG matching in WHIZARD
NASA Astrophysics Data System (ADS)
Reuter, Jürgen; Chokoufé, Bijan; Hoang, André; Kilian, Wolfgang; Stahlhofen, Maximilian; Teubner, Thomas; Weiss, Christian
2016-10-01
We give a status report on the automation of next-to-leading order processes within the Monte Carlo event generator WHIZARD, using GoSam and OpenLoops as providers of one-loop matrix elements. To deal with divergences, WHIZARD uses automated FKS subtraction, and the phase space for singular regions is generated automatically. NLO examples for both scattering and decay processes with a focus on e+ e- processes are shown. Also, first NLO studies of observables for collisions of polarized lepton beams, e.g. at the ILC, will be presented. Furthermore, the automatic matching of the fixed-order NLO amplitudes with emissions from the parton shower within the Powheg formalism inside WHIZARD will be discussed. We also present results for top pairs at threshold in lepton collisions, including matching between a resummed threshold calculation and fixed-order NLO. This allows the investigation of more exclusive differential observables.
Lange, Paul P; James, Keith
2012-10-08
A novel methodology for the synthesis of druglike heterocycle libraries has been developed through the use of flow reactor technology. The strategy employs orthogonal modification of a heterocyclic core, which is generated in situ, and was used to construct both a 25-membered library of druglike 3-aminoindolizines, and selected examples of a 100-member virtual library. This general protocol allows a broad range of acylation, alkylation and sulfonamidation reactions to be performed in conjunction with a tandem Sonogashira coupling/cycloisomerization sequence. All three synthetic steps were conducted under full automation in the flow reactor, with no handling or isolation of intermediates, to afford the desired products in good yields. This fully automated, multistep flow approach opens the way to highly efficient generation of druglike heterocyclic systems as part of a lead discovery strategy or within a lead optimization program.
Acoustic-sensor-based detection of damage in composite aircraft structures
NASA Astrophysics Data System (ADS)
Foote, Peter; Martin, Tony; Read, Ian
2004-03-01
Acoustic emission detection is a well-established method of locating and monitoring crack development in metal structures. The technique has been adapted to test facilities for non-destructive testing applications. Deployment as an operational or on-line automated damage detection technology in vehicles is posing greater challenges. A clear requirement of potential end-users of such systems is a level of automation capable of delivering low-level diagnosis information. The output from the system is in the form of "go", "no-go" indications of structural integrity or immediate maintenance actions. This level of automation requires significant data reduction and processing. This paper describes recent trials of acoustic emission detection technology for the diagnosis of damage in composite aerospace structures. The technology comprises low-profile detection sensors using piezoelectric wafers encapsulated in polymer film, and optical sensors. Sensors are bonded to the structure's surface and enable acoustic events from the loaded structure to be located by triangulation. Instrumentation has been developed to capture and parameterise the sensor data in a form suitable for low-bandwidth storage and transmission.
Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.
2012-01-01
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
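The generation of zero-order phenotypes from a normative action sequence can be illustrated with a simplified sketch: each position in the sequence spawns an omission, a repetition, and one intrusion variant per foreign action. The paper works with task-analytic models inside a model checker, not flat sequences, so this function and its names are illustrative assumptions only.

```python
def zero_order_phenotypes(task, intrusions):
    """Generate single-error variants of a normative action sequence
    using zero-order phenotypes of erroneous action: omission,
    repetition, and intrusion. (Jumps, and higher-order phenotypes,
    arise from composing these.) `task` is a list of action labels.
    """
    variants = set()
    n = len(task)
    for i in range(n):
        # omission: skip action i
        variants.add(tuple(task[:i] + task[i + 1:]))
        # repetition: perform action i twice
        variants.add(tuple(task[:i + 1] + [task[i]] + task[i + 1:]))
        # intrusion: insert a foreign action before action i
        for a in intrusions:
            variants.add(tuple(task[:i] + [a] + task[i:]))
    return variants
```

Each generated variant could then be checked against safety properties, in the spirit of the model-checking analysis described above.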
Emerging CFD technologies and aerospace vehicle design
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.
1995-01-01
With the recent focus on the needs of design and applications CFD, research groups have begun to address the traditional bottlenecks of grid generation and surface modeling. Now, a host of emerging technologies promise to shortcut or dramatically simplify the simulation process. This paper discusses the current status of these emerging technologies. It will argue that some tools are already available which can have positive impact on portions of the design cycle. However, in most cases, these tools need to be integrated into specific engineering systems and process cycles to be used effectively. The rapidly maturing status of unstructured and Cartesian approaches for inviscid simulations suggests the possibility of highly automated Euler-boundary layer simulations with application to loads estimation and even preliminary design. Similarly, technology is available to link block-structured mesh generation algorithms with topology libraries to avoid tedious re-meshing of topologically similar configurations. Work in algorithm-based auto-blocking suggests that domain decomposition and point placement operations in multi-block mesh generation may be properly posed as problems in Computational Geometry, and following this approach may lead to robust algorithmic processes for automatic mesh generation.
Multi-Mission Automated Task Invocation Subsystem
NASA Technical Reports Server (NTRS)
Cheng, Cecilia S.; Patel, Rajesh R.; Sayfi, Elias M.; Lee, Hyun H.
2009-01-01
Multi-Mission Automated Task Invocation Subsystem (MATIS) is software that establishes a distributed data-processing framework for automated generation of instrument data products from a spacecraft mission. Each mission may set up a set of MATIS servers for processing its data products. MATIS embodies lessons learned in experience with prior instrument- data-product-generation software. MATIS is an event-driven workflow manager that interprets project-specific, user-defined rules for managing processes. It executes programs in response to specific events under specific conditions according to the rules. Because requirements of different missions are too diverse to be satisfied by one program, MATIS accommodates plug-in programs. MATIS is flexible in that users can control such processing parameters as how many pipelines to run and on which computing machines to run them. MATIS has a fail-safe capability. At each step, MATIS captures and retains pertinent information needed to complete the step and start the next step. In the event of a restart, this information is retrieved so that processing can be resumed appropriately. At this writing, it is planned to develop a graphical user interface (GUI) for monitoring and controlling a product generation engine in MATIS. The GUI would enable users to schedule multiple processes and manage the data products produced in the processes. Although MATIS was initially designed for instrument data product generation,
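The event-driven, rule-based pattern described above can be sketched as a minimal rule engine: user-defined rules bind an event and a condition to an action, and a dispatcher fires every matching rule. The class and method names are hypothetical; MATIS itself is a distributed server framework with plug-in programs, far beyond this sketch.

```python
class Workflow:
    """Minimal event-driven rule engine: executes programs (here,
    plain callables) in response to specific events under specific
    conditions, per user-defined rules. Illustrative API only.
    """

    def __init__(self):
        self.rules = []

    def on(self, event, condition, action):
        """Register a rule: when `event` occurs and `condition(context)`
        holds, run `action(context)`."""
        self.rules.append((event, condition, action))

    def dispatch(self, event, context):
        """Fire all rules matching this event; return their results."""
        fired = []
        for ev, cond, act in self.rules:
            if ev == event and cond(context):
                fired.append(act(context))
        return fired
```

For example, a rule might launch a processing pipeline whenever a nonempty data file arrives:

```python
wf = Workflow()
wf.on("file_arrived", lambda c: c["size"] > 0,
      lambda c: "process " + c["name"])
```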
Management by Trajectory: Trajectory Management Study Report
NASA Technical Reports Server (NTRS)
Leiden, Kenneth; Atkins, Stephen; Fernandes, Alicia D.; Kaler, Curt; Bell, Alan; Kilbourne, Todd; Evans, Mark
2017-01-01
In order to realize the full potential of the Next Generation Air Transportation System (NextGen), improved management along planned trajectories between air navigation service providers (ANSPs) and system users (e.g., pilots and airline dispatchers) is needed. Future automation improvements and increased data communications between aircraft and ground automation would make the concept of Management by Trajectory (MBT) possible.
Evaluation of an automated room decontamination device using aerosolized peracetic acid.
Mana, Thriveen S C; Sitzlar, Brett; Cadnum, Jennifer L; Jencson, Annette L; Koganti, Sreelatha; Donskey, Curtis J
2017-03-01
Because manual cleaning is often suboptimal, there is increasing interest in use of automated devices for room decontamination. We demonstrated that an ultrasonic room fogging system that generates submicron droplets of peracetic acid and hydrogen peroxide eliminated Clostridium difficile spores and vegetative pathogens from exposed carriers in hospital rooms and adjacent bathrooms. Published by Elsevier Inc.
Apparatus for automated testing of biological specimens
Layne, Scott P.; Beugelsdijk, Tony J.
1999-01-01
An apparatus for performing automated testing of infectious biological specimens is disclosed. The apparatus comprises a process controller for translating user commands into test instrument suite commands, and a test instrument suite comprising a means to treat the specimen to manifest an observable result, and a detector for measuring the observable result to generate specimen test results.
ERIC Educational Resources Information Center
Cotos, Elena
2010-01-01
This dissertation presents an innovative approach to the development and empirical evaluation of Automated Writing Evaluation (AWE) technology used for teaching and learning. It introduces IADE (Intelligent Academic Discourse Evaluator), a new web-based AWE program that analyzes research article Introduction sections and generates immediate,…
Automation Framework for Flight Dynamics Products Generation
NASA Technical Reports Server (NTRS)
Wiegand, Robert E.; Esposito, Timothy C.; Watson, John S.; Jun, Linda; Shoan, Wendy; Matusow, Carla
2010-01-01
XFDS provides an easily adaptable automation platform. To date it has been used to support flight dynamics operations. It coordinates the execution of other applications such as Satellite Tool Kit, FreeFlyer, MATLAB, and Perl code. It provides a mechanism for passing messages among a collection of XFDS processes, and allows sending and receiving of GMSEC messages. A unified and consistent graphical user interface (GUI) is used for the various tools. Its automation configuration is stored in text files, and can be edited either directly or using the GUI.
Automated software development workstation
NASA Technical Reports Server (NTRS)
1986-01-01
Engineering software development was automated using an expert system (rule-based) approach. The use of this technology offers benefits not available from current software development and maintenance methodologies. A workstation was built with a library or program data base with methods for browsing the designs stored; a system for graphical specification of designs including a capability for hierarchical refinement and definition in a graphical design system; and an automated code generation capability in FORTRAN. The workstation was then used in a demonstration with examples from an attitude control subsystem design for the space station. Documentation and recommendations are presented.
Automated verification of flight software. User's manual
NASA Technical Reports Server (NTRS)
Saib, S. H.
1982-01-01
AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.