Science.gov

Sample records for automated parallel cultures

  1. Automated Parallel Capillary Electrophoretic System

    DOEpatents

    Li, Qingbo; Kane, Thomas E.; Liu, Changsheng; Sonnenschein, Bernard; Sharer, Michael V.; Kernan, John R.

    2000-02-22

An automated electrophoretic system is disclosed. The system employs a capillary cartridge having a plurality of capillary tubes. The cartridge has a first array of capillary ends projecting from one side of a plate. The first array of capillary ends is spaced apart in substantially the same manner as the wells of a microtitre tray of standard size. This allows one to simultaneously perform capillary electrophoresis on samples present in each of the wells of the tray. The system includes a stacked, dual-carousel arrangement to eliminate cross-contamination resulting from reuse of the same buffer tray on consecutive executions of electrophoresis. The system also has a gel delivery module, containing either a gel syringe driven by a stepper motor or a high-pressure chamber with a pump, to quickly and uniformly deliver gel through the capillary tubes. The system further includes a multi-wavelength beam generator that produces a laser beam spanning a wide range of wavelengths. An off-line capillary reconditioner thoroughly cleans a capillary cartridge to enable simultaneous execution of electrophoresis with another capillary cartridge. The streamlined nature of the off-line capillary reconditioner offers the advantage of increased system throughput with a minimal increase in system cost.

  2. At the intersection of automation and culture

    NASA Technical Reports Server (NTRS)

    Sherman, P. J.; Wiener, E. L.

    1995-01-01

The crash of an automated passenger jet at Nagoya, Japan, in 1994 is used as an example of crew error in using automatic systems. Automation provides pilots with the ability to perform tasks in various ways. National culture is cited as a factor that affects how a pilot and crew interact with each other and with their equipment.

  3. Automated Parallel Recordings of Topologically Identified Single Ion Channels

    PubMed Central

    Kawano, Ryuji; Tsuji, Yutaro; Sato, Koji; Osaki, Toshihisa; Kamiya, Koki; Hirano, Minako; Ide, Toru; Miki, Norihisa; Takeuchi, Shoji

    2013-01-01

Although ion channels are attractive targets for drug discovery, the systematic screening of ion channel-targeted drugs remains challenging. To facilitate automated single ion-channel recordings for the analysis of drug interactions with the intra- and extracellular domains, we have developed a parallel recording methodology using artificial cell membranes. The use of stable lipid bilayer formation in droplet chamber arrays facilitated automated, parallel, single-channel recording from reconstituted native and mutated ion channels. Using this system, several types of ion channels, including mutated forms, were characterised by determining the protein orientation. In addition, we provide evidence that both intra- and extracellular amyloid-beta fragments directly inhibit the channel open probability of the hBK channel. This automated methodology provides a high-throughput drug screening system for the targeting of ion channels and a data-intensive analysis technique for studying ion channel gating mechanisms. PMID:23771282

  4. Automated maintenance of embryonic stem cell cultures.

    PubMed

    Terstegge, Stefanie; Laufenberg, Iris; Pochert, Jörg; Schenk, Sabine; Itskovitz-Eldor, Joseph; Endl, Elmar; Brüstle, Oliver

    2007-01-01

Embryonic stem cell (ESC) technology provides attractive perspectives for generating unlimited numbers of somatic cells for disease modeling and compound screening. A key prerequisite for these industrial applications is standardized, automated systems suitable for stem cell processing. Here we demonstrate that mouse and human ESCs propagated by automated culture maintain their mean specific growth rates, their capacity for multi-germ-layer differentiation, and the expression of the pluripotency-associated markers SSEA-1/Oct-4 and Tra-1-60/Tra-1-81/Oct-4, respectively. The feasibility of ESC culture automation may greatly facilitate the use of this versatile cell source for a variety of biomedical applications.

  5. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

Software for geodynamic modeling has not kept up with the fast-growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, taking full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed- or shared-memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will allow high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  6. Hierarchically parallelized constrained nonlinear solvers with automated substructuring

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Kwang, A.

    1991-01-01

This paper develops a parallelizable multilevel constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, partially parallel, and fully parallel environments can all be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capacity to yield significant reductions in memory utilization and calculational effort due to both updating and inversion.

  7. Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Kwang, Abel

    1994-01-01

This paper develops a parallelizable multilevel multiple-constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, partially parallel, and fully parallel environments can all be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort due to both updating and inversion.

  8. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The NIK toolkit described in this paper is the result of an ongoing effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
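As a rough, hypothetical sketch of the kind of analytic expression such a toolkit derives (the model form and all parameter names below are assumptions for illustration, not taken from the paper), execution time can be modeled as ideally divided computation plus a per-step latency and message-transfer cost:

```python
def predicted_time(work, procs, steps, latency, msg_bytes, bandwidth):
    # Ideal computation split across processors, plus a communication
    # cost of one latency and one message transfer per exchange step.
    compute = work / procs
    comm = steps * (latency + msg_bytes / bandwidth)
    return compute + comm

# Speedup levels off once communication starts to dominate:
t1 = predicted_time(work=64.0, procs=1, steps=0, latency=0.0, msg_bytes=0.0, bandwidth=1.0)
t8 = predicted_time(work=64.0, procs=8, steps=100, latency=0.01, msg_bytes=1e4, bandwidth=1e6)
```

Here the 8-processor run is predicted at 10.0 time units against 64.0 for the serial run, a speedup of 6.4 rather than 8 because of the communication term.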

  9. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that is guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high-quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time-consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
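The master-slave scheme described above can be sketched in a few lines. This is a generic toy illustration (maximising the count of 1-bits, with a thread pool standing in for the slave nodes), not the authors' circuit-design system:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(bits):
    # Toy objective standing in for an expensive circuit evaluation:
    # score a genome by its number of 1-bits.
    return sum(bits)

def evolve(pop_size=20, genome_len=16, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=4) as workers:   # the "slave" nodes
        for _ in range(generations):
            # Slaves: compute the fitness evaluations in parallel.
            scores = list(workers.map(fitness, pop))
            # Master: tournament selection, one-point crossover, mutation.
            def pick():
                a, b = rng.randrange(pop_size), rng.randrange(pop_size)
                return pop[a] if scores[a] >= scores[b] else pop[b]
            offspring = []
            while len(offspring) < pop_size:
                p1, p2 = pick(), pick()
                cut = rng.randrange(1, genome_len)
                child = p1[:cut] + p2[cut:]
                if rng.random() < 0.1:                   # occasional bit-flip
                    child[rng.randrange(genome_len)] ^= 1
                offspring.append(child)
            pop = offspring
        return max(workers.map(fitness, pop))

best = evolve()
```

In a real master-slave GA the workers are separate cluster nodes exchanging messages with the master; only the fitness evaluations are farmed out, which is why the required communication bandwidth stays low.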

  10. Automating the selection of standard parallels for conic map projections

    NASA Astrophysics Data System (ADS)

Šavrič, Bojan; Jenny, Bernhard

    2016-05-01

Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
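For contrast with the polynomial model, one widely quoted rule of thumb (the "one-sixth rule") fits in a couple of lines; the function name and signature here are my own sketch, not from the article:

```python
def standard_parallels(lat_min, lat_max):
    # "One-sixth rule" of thumb: place the standard parallels one sixth
    # of the latitude span inside the southern and northern map limits.
    span = lat_max - lat_min
    return lat_min + span / 6.0, lat_max - span / 6.0

# e.g. a map from 20 N to 50 N gets standard parallels at 25 N and 45 N:
phi1, phi2 = standard_parallels(20.0, 50.0)  # (25.0, 45.0)
```

Such a rule ignores the map's width-to-height ratio and central-meridian length, which is exactly the information the article's polynomial model takes into account to reduce distortion further.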

  11. A Method For Parallel, Automated, Thermal Cycling of Submicroliter Samples

    PubMed Central

    Nakane, Jonathan; Broemeling, David; Donaldson, Roger; Marziali, Andre; Willis, Thomas D.; O'Keefe, Matthew; Davis, Ronald W.

    2001-01-01

    A large fraction of the cost of DNA sequencing and other DNA-analysis processes results from the reagent costs incurred during cycle sequencing or PCR. In particular, the high cost of the enzymes and dyes used in these processes often results in thermal cycling costs exceeding $0.50 per sample. In the case of high-throughput DNA sequencing, this is a significant and unnecessary expense. Improved detection efficiency of new sequencing instrumentation allows the reaction volumes for cycle sequencing to be scaled down to one-tenth of presently used volumes, resulting in at least a 10-fold decrease in the cost of this process. However, commercially available thermal cyclers and automated reaction setup devices have inherent design limitations which make handling volumes of <1 μL extremely difficult. In this paper, we describe a method for thermal cycling aimed at reliable, automated cycling of submicroliter reaction volumes. PMID:11230168

  12. Automated adherent human cell culture (mesenchymal stem cells).

    PubMed

    Thomas, Robert; Ratcliffe, Elizabeth

    2012-01-01

    Human cell culture processes developed at research laboratory scale need to be translated to large-scale production processes to achieve commercial application to a large market. To allow this transition of scale with consistent process performance and control of costs, it will be necessary to reduce manual processing and increase automation. There are a number of commercially available platforms that will reduce manual process intervention and improve process control for different culture formats. However, in many human cell-based applications, there is currently a need to remain close to the development format, usually adherent culture on cell culture plastic or matrix-coated wells or flasks due to deterioration of cell quality in other environments, such as suspension. This chapter presents an example method for adherent automated human stem cell culture using a specific automated flask handling platform, the CompacT SelecT.

  13. Automating the parallel processing of fluid and structural dynamics calculations

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  15. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  16. Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments

    SciTech Connect

    Alireza Haghighat

    2003-05-07

    This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.

  17. Automated CFD Parameter Studies on Distributed Parallel Computers

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Aftosmis, Michael; Pandya, Shishir; Tejnil, Edward; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2002-01-01

The objective of the current work is to build a prototype software system that automates the process of running CFD jobs on Information Power Grid (IPG) resources. This system should remove the need for user monitoring and intervention of every single CFD job. It should enable the use of many different computers to populate a massive run matrix in the shortest time possible. Such a software system has been developed and is known as the AeroDB script system. The approach taken for the development of AeroDB was to build several discrete modules: a database, a job-launcher module, a run-manager module to monitor each individual job, and a web-based user portal for monitoring the progress of the parameter study. The paper presents the design of AeroDB, followed by the results of a parameter study performed using AeroDB for the analysis of a reusable launch vehicle (RLV). The paper concludes with lessons learned in this effort and ideas for future work in this area.

  18. Flexible automation of cell culture and tissue engineering tasks.

    PubMed

    Knoll, Alois; Scherer, Torsten; Poggendorf, Iris; Lütkemeyer, Dirk; Lehmann, Jürgen

    2004-01-01

    Until now, the predominant use cases of industrial robots have been routine handling tasks in the automotive industry. In biotechnology and tissue engineering, in contrast, only very few tasks have been automated with robots. New developments in robot platform and robot sensor technology, however, make it possible to automate plants that largely depend on human interaction with the production process, e.g., for material and cell culture fluid handling, transportation, operation of equipment, and maintenance. In this paper we present a robot system that lends itself to automating routine tasks in biotechnology but also has the potential to automate other production facilities that are similar in process structure. After motivating the design goals, we describe the system and its operation, illustrate sample runs, and give an assessment of the advantages. We conclude this paper by giving an outlook on possible further developments.

  20. Automation of 3D cell culture using chemically defined hydrogels.

    PubMed

    Rimann, Markus; Angres, Brigitte; Patocchi-Tenzer, Isabel; Braum, Susanne; Graf-Hausner, Ursula

    2014-04-01

Drug development relies on high-throughput screening involving cell-based assays. Most of the assays are still based on cells grown in monolayer rather than in three-dimensional (3D) formats, although cells behave more in vivo-like in 3D. To exemplify the adoption of 3D techniques in drug development, this project investigated the automation of a hydrogel-based 3D cell culture system using a liquid-handling robot. The hydrogel technology used offers high flexibility of gel design due to a modular composition of a polymer network and bioactive components. The cell-inert degradation of the gel at the end of the culture period guaranteed the harmless isolation of live cells for further downstream processing. Human colon carcinoma cells HCT-116 were encapsulated and grown in these dextran-based hydrogels, thereby forming 3D multicellular spheroids. Viability and DNA content of the cells were shown to be similar in automated and manually produced hydrogels. Furthermore, cell treatment with toxic Taxol concentrations (100 nM) had the same effect on HCT-116 cell viability in manual and automated hydrogel preparations. Finally, a fully automated dose-response curve with the reference compound Taxol showed the potential of this hydrogel-based 3D cell culture system in advanced drug development.

  1. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, typically performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated, and the individual time points can be processed independently, which lends itself to trivial parallelization on a high-performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multi-illumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on Snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
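The "trivially parallel over time points" structure the abstract relies on can be sketched generically (the placeholder function and names here are mine; the real pipeline delegates this fan-out, plus the dependency tracking between consecutive steps, to Snakemake):

```python
from concurrent.futures import ThreadPoolExecutor

def register(timepoint):
    # Placeholder for the per-time-point work (registration, fusion, ...).
    return f"tp{timepoint:04d}: registered"

def run_pipeline(n_timepoints, workers=4):
    # Each time point is independent of the others, so the whole loop
    # fans out trivially across workers (or across HPC cluster jobs).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(register, range(n_timepoints)))
```

On a cluster, each `register(t)` call would instead become its own batch job, which is why the wall-clock time shrinks toward the duration of a single time point.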

  2. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single-process) C or C++, and an order of magnitude less
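A very loose analog of the NT law, written here in Python purely as an illustration (this is not SequenceL, and far weaker than its compiler): a function written for scalars, when handed a possibly nested sequence, is "transposed" into one independent application per element, and those independent applications can then run in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def nt_apply(fn, arg, pool):
    # Loose analog of Normalize-Transpose: a scalar function applied to a
    # (possibly nested) sequence becomes an independent application per
    # element; the independent applications are dispatched to the pool.
    if isinstance(arg, list):
        return list(pool.map(lambda x: nt_apply(fn, x, pool), arg))
    return fn(arg)

with ThreadPoolExecutor(max_workers=4) as pool:
    result = nt_apply(lambda x: x * x, [1, [2, 3], 4], pool)  # [1, [4, 9], 16]
```

The point of the real NT law is that this lifting happens in the language semantics, so the programmer writes only the scalar-level definition and never annotates parallelism.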

  3. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    NASA Astrophysics Data System (ADS)

Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-06-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL-1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications.

  4. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    PubMed Central

    Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-01-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL−1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253

  5. Mini-scale bioprocessing systems for highly parallel animal cell cultures.

    PubMed

    Kim, Beum Jun; Diao, Jinpian; Shuler, Michael L

    2012-01-01

    Animal cells have been used extensively in therapeutic protein production. The growth of animal cells and the expression of therapeutic proteins are highly dependent on the culturing environment. A large number of experimental permutations must be explored to identify the optimal culturing conditions. Miniaturized bioreactors are well suited for such tasks, as they offer high-throughput parallel operation and reduce reagent costs. They can also be automated and coupled to downstream analytical units for online measurement of culture products. This review summarizes the current status of miniaturized bioreactors for animal cell cultivation, organized by design category: microtiter plates, flasks, stirred-tank reactors, novel designs with active mixing, and microfluidic cell culture devices. For each system, we compare cell density and product titer in batch or fed-batch modes. Monitoring and control devices for engineering parameters such as pH, dissolved oxygen, and dissolved carbon dioxide, which could be applied to such systems, are summarized. Finally, mini-scale tools for evaluating process performance in animal cell cultures are discussed: total cell density, cell viability, product titer and quality, and substrate and metabolite profiles. PMID:22522970

  6. Parallel Microfluidic Chemosensitivity Testing on Individual Slice Cultures

    PubMed Central

    Chang, Tim C.; Mikheev, Andrei M.; Huynh, Wilson; Monnat, Raymond J.; Rostomily, Robert C.; Folch, Albert

    2014-01-01

    There is a critical unmet need to tailor chemotherapies to individual patients. Personalized approaches could lower treatment toxicity, improve the patient’s quality of life, and ultimately reduce mortality. However, existing models of drug activity (based on tumor cells in culture or animal models) cannot accurately predict how drugs act in patients in time to inform the best possible treatment. Here we demonstrate a microfluidic device that integrates live slice cultures with an intuitive multi-well platform that allows for exposing the slices to multiple compounds at once or in sequence. We demonstrate the response of live mouse brain slices to a range of drug doses in parallel. Drug response is measured by imaging markers of cell apoptosis and cell death. The platform has the potential to identify the subset of therapies of greatest value to individual patients, on a timescale rapid enough to guide therapeutic decision-making. PMID:25275698

  7. Digital microfluidics for automated hanging drop cell spheroid culture.

    PubMed

    Aijian, Andrew P; Garrell, Robin L

    2015-06-01

    Cell spheroids are multicellular aggregates, grown in vitro, that mimic the three-dimensional morphology of physiological tissues. Although there are numerous benefits to using spheroids in cell-based assays, the adoption of spheroids in routine biomedical research has been limited, in part, by the tedious workflow associated with spheroid formation and analysis. Here we describe a digital microfluidic platform that has been developed to automate liquid-handling protocols for the formation, maintenance, and analysis of multicellular spheroids in hanging drop culture. We show that droplets of liquid can be added to and extracted from through-holes, or "wells," fabricated in the bottom plate of a digital microfluidic device, enabling the formation and assaying of hanging drops. Using this platform, spheroids of mouse mesenchymal stem cells were formed and maintained in situ for 72 h, exhibiting good viability (>90%) and size uniformity (coefficient of variation <10% intra-experiment, <20% inter-experiment). A proof-of-principle drug screen was performed on human colorectal adenocarcinoma spheroids to demonstrate the ability to recapitulate physiologically relevant phenomena such as insulin-induced drug resistance. With automatable and flexible liquid handling, and a wide range of in situ sample preparation and analysis capabilities, the digital microfluidic platform provides a viable tool for automating cell spheroid culture and analysis.

  8. Final report for "Automated Diagnosis of Large Scale Parallel Applications"

    SciTech Connect

    Karavanic, K L

    2000-11-17

    The work performed is part of a continuing research project, PPerfDB, headed by Dr. Karavanic. We are studying the application of experiment management techniques to the problems associated with gathering, storing, and using performance data, with the goal of achieving completely automated diagnosis of application and system bottlenecks. This summer we focused on incorporating heterogeneous data from a variety of tools, applications, and platforms, and on designing novel techniques for automated performance diagnosis. The Experiment Management paradigm is a useful approach for designing a tool that will automatically diagnose performance problems in large-scale parallel applications. The ability to gather, store, and use performance data collected over time from different executions and different collection tools enables more sophisticated approaches to performance diagnosis and to performance evaluation more generally. We look forward to continuing our efforts through further development and analysis of online diagnosis using historical data, and by investigating performance data and diagnoses gathered from mixed MPI/OpenMP applications.

  9. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Ng, K.T.; Nadeem, A.

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.
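Parallel direct search (PDS) methods of the kind described poll trial configurations around the current best point; each objective evaluation (in the paper, a full 3-D finite-element solve) can run on a separate processor. A serial toy sketch of the polling logic, using a simple quadratic stand-in for the FEM objective (function and parameters are illustrative, not from the paper):

```python
def compass_search(f, x0, step=1.0, tol=1e-3, max_iter=1000):
    # Poll +/- step along each coordinate; accept the best improving
    # poll, otherwise halve the step. The 2*dim evaluations per
    # iteration are independent, which is what makes the method
    # attractive on message-passing parallel machines.
    x, fx = list(x0), f(x0)
    iters = 0
    while step > tol and iters < max_iter:
        iters += 1
        best_y, best_fy = None, None
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if best_fy is None or fy < best_fy:
                    best_y, best_fy = y, fy
        if best_fy < fx:
            x, fx = best_y, best_fy
        else:
            step /= 2.0
    return x, fx

# toy quadratic stand-in for the FEM-based defibrillation objective
quad = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
xmin, fmin = compass_search(quad, [0.0, 0.0])
```

On this toy problem the search converges to the minimizer (1, -2); the real objective in the paper additionally encodes defibrillation criteria, power, and myocardial damage constraints.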

  10. Functionalized Polymers-Emerging Versatile Tools for Solution-Phase Chemistry and Automated Parallel Synthesis.

    PubMed

    Kirschning, Andreas; Monenschein, Holger; Wittenberg, Rüdiger

    2001-02-16

    As part of the dramatic changes associated with the need to prepare compound libraries in pharmaceutical and agrochemical research laboratories, industry searches for new technologies that allow for the automation of synthetic processes. Since the pioneering work by Merrifield, polymeric supports have been identified as playing a key role in this field; however, polymer-assisted solution-phase synthesis, which utilizes immobilized reagents and catalysts, has only recently begun to flourish. Polymer-assisted solution-phase synthesis has various advantages over conventional solution-phase chemistry, such as the ease of separating the supported species from a reaction mixture by filtration and washing, the opportunity to use an excess of the reagent to force the reaction to completion without causing workup problems, and the adaptability to continuous-flow processes. Various strategies for employing functionalized polymers stoichiometrically have been developed. Apart from reagents that are covalently or ionically attached to the polymeric backbone and are released into solution in the presence of a suitable substrate, scavenger reagents play an increasingly important role in purifying reaction mixtures. Employing functionalized polymers in solution-phase synthesis has been shown to be extremely useful in automated parallel synthesis and multistep sequences. So far, compound libraries containing as many as 88 members have been generated by using several polymer-bound reagents one after another. Furthermore, it has been demonstrated that complex natural products like the alkaloids (+/-)-oxomaritidine and (+/-)-epimaritidine can be prepared by sequences of five and six consecutive polymer-assisted steps, respectively, and the potent analgesic compound (+/-)-epibatidine in twelve linear steps, ten of which are based on functionalized polymers. These developments reveal the great future prospects of polymer-assisted solution-phase synthesis.

  11. Automated integration of genomic physical mapping data via parallel simulated annealing

    SciTech Connect

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques, including fluorescence in situ hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data, since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
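The column-rearrangement formulation above can be sketched generically: permute clone "columns" so that the members of each object "row" become contiguous, accepting occasional uphill swaps under a cooling temperature. This is a toy serial version with invented data and schedule, not LLNL's constrained parallel implementation:

```python
import math
import random

def gaps(order, rows):
    # Total gap count: for each larger object (row), count breaks
    # between consecutive member clones in the current column order.
    total = 0
    for members in rows:
        pos = sorted(order.index(c) for c in members)
        total += sum(1 for a, b in zip(pos, pos[1:]) if b - a > 1)
    return total

def anneal(rows, n_clones, steps=20000, t0=2.0, seed=1):
    # Simulated annealing over column permutations with a linear
    # cooling schedule; random pairwise swaps as the move set.
    rng = random.Random(seed)
    order = list(range(n_clones))
    rng.shuffle(order)
    cost = gaps(order, rows)
    best_order, best_cost = order[:], cost
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9
        i, j = rng.sample(range(n_clones), 2)
        order[i], order[j] = order[j], order[i]
        new = gaps(order, rows)
        if new <= cost or rng.random() < math.exp(-(new - cost) / t):
            cost = new
            if cost < best_cost:
                best_order, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]  # reject the swap
    return best_order, best_cost

# toy data: three probes/contigs over six clones; a zero-gap
# arrangement exists (e.g. the identity order)
rows = [{0, 1, 2}, {2, 3}, {3, 4, 5}]
order, cost = anneal(rows, 6)
```

The real system additionally enforces FISH-derived ordering constraints on the moves and distributes cost evaluations across a network of workstations.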

  12. An Extended Case Study Methodology for Investigating Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    NASA Technical Reports Server (NTRS)

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational, and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  13. Long-term maintenance of human induced pluripotent stem cells by automated cell culture system.

    PubMed

    Konagaya, Shuhei; Ando, Takeshi; Yamauchi, Toshiaki; Suemori, Hirofumi; Iwata, Hiroo

    2015-01-01

    Pluripotent stem cells, such as embryonic stem cells and induced pluripotent stem (iPS) cells, are regarded as new sources for cell replacement therapy. These cells can expand indefinitely in an undifferentiated state and can be differentiated into multiple cell types. Automated culture systems enable the large-scale production of cells. In addition to reducing the time and effort of researchers, an automated culture system improves the reproducibility of cell cultures. In the present study, we designed a fully automated cell culture system for human iPS cell maintenance. Using the automated culture system, hiPS cells maintained their undifferentiated state for 60 days. Automatically prepared hiPS cells retained the potential to differentiate into cells of all three germ layers, including dopaminergic neurons and pancreatic cells. PMID:26573336

  14. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    PubMed Central

    Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device containing 32 microchambers to perform automated parallel microfluidic operations and monitoring on the automated stage of a microscope. Images are captured at multiple spots on the device during operation to monitor the samples in the microchambers in parallel; yet the device position may vary at different time points as the device moves back and forth on the motorized microscope stage. Here, we report an image-based positioning strategy to realign the chamber positions before every microscopic image is recorded. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can serve as a platform technology for achieving precise positioning of multiple chambers in general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
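Locating an alignment mark and computing the corrective shift can be illustrated with brute-force template matching. The abstract does not specify the authors' algorithm, so the mark size, positions, and sum-of-squared-differences criterion below are assumptions for illustration:

```python
def locate_mark(image, template):
    # Brute-force template matching: return the (row, col) offset of
    # the alignment mark, found by minimizing the sum of squared
    # differences (SSD) over all valid placements.
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best = None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if best is None or ssd < best[0]:
                best = (ssd, r, c)
    return best[1], best[2]

def realign_shift(found, preset):
    # Translation to apply so the chamber returns to its preset position.
    return preset[0] - found[0], preset[1] - found[1]

# toy 8x8 frame with a bright 2x2 mark at row 5, col 3
frame = [[0] * 8 for _ in range(8)]
for i in (5, 6):
    for j in (3, 4):
        frame[i][j] = 255
mark = [[255, 255], [255, 255]]
pos = locate_mark(frame, mark)             # (5, 3)
dr, dc = realign_shift(pos, preset=(4, 4))  # shift of (-1, 1)
```

A production implementation would use normalized correlation and subpixel refinement rather than exhaustive SSD, but the realignment logic is the same.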

  15. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    SciTech Connect

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary; Liu, Mingliang; Logan, Jeremy S; Podhorszki, Norbert; Choi, Jong Youl; Klasky, Scott A

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication and I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks: they retain the original applications' performance characteristics, in particular the relative performance across platforms.

  16. "Parallel Leadership in an "Unparallel" World"--Cultural Constraints on the Transferability of Western Educational Leadership Theories across Cultures

    ERIC Educational Resources Information Center

    Goh, Jonathan Wee Pin

    2009-01-01

    With the global economy becoming more integrated, the issues of cross-cultural relevance and transferability of leadership theories and practices have become increasingly urgent. Drawing upon the concept of parallel leadership in schools proposed by Crowther, Kaagan, Ferguson, and Hann as an example, the purpose of this paper is to examine the…

  17. Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram

    PubMed Central

    Zang, Pengxiao; Liu, Gangjun; Zhang, Miao; Dongye, Changlei; Wang, Jie; Pechauer, Alex D.; Hwang, Thomas S.; Wilson, David J.; Huang, David; Li, Dengwang

    2016-01-01

    We propose an innovative registration method to correct motion artifacts in wide-field optical coherence tomography angiography (OCTA) acquired by ultrahigh-speed swept-source OCT (>200 kHz A-scan rate). Considering that the number of A-scans along the fast axis is much higher than the number of positions along the slow axis in the wide-field OCTA scan, a non-orthogonal scheme is introduced. Two en face angiograms in the vertical priority (2 y-fast) are divided into microsaccade-free parallel strips. A gross registration based on large vessels and a fine registration based on small vessels are sequentially applied to register the parallel strips into a composite image. This technique is extended to automatically montage individual registered, motion-free angiograms into an ultrawide-field view. PMID:27446709

  18. Automated parallel synthesis of 5'-triphosphate oligonucleotides and preparation of chemically modified 5'-triphosphate small interfering RNA.

    PubMed

    Zlatev, Ivan; Lackey, Jeremy G; Zhang, Ligang; Dell, Amy; McRae, Kathy; Shaikh, Sarfraz; Duncan, Richard G; Rajeev, Kallanthottathil G; Manoharan, Muthiah

    2013-02-01

    A fully automated chemical method for the parallel, high-throughput solid-phase synthesis of 5'-triphosphate and 5'-diphosphate oligonucleotides is described. The desired full-length oligonucleotides were first constructed using standard automated DNA/RNA solid-phase synthesis procedures. Then, on the same column and instrument, efficient implementation of an uninterrupted sequential cycle afforded the corresponding unmodified or chemically modified 5'-triphosphates and 5'-diphosphates. The method was readily translated into a scalable, high-throughput synthesis protocol compatible with current DNA/RNA synthesizers, yielding a large variety of unique 5'-polyphosphorylated oligonucleotides. Using this approach, we accomplished the synthesis of chemically modified 5'-triphosphate oligonucleotides that were annealed to form small-interfering RNAs (ppp-siRNAs), a potentially interesting class of novel RNAi therapeutic tools. Attachment of the 5'-triphosphate group to the passenger strand of a siRNA construct did not significantly improve in vitro RNAi-mediated gene-silencing activity, nor did it induce strong, specific in vitro RIG-I activation. The reported method will enable the screening of many chemically modified ppp-siRNAs, resulting in a novel bi-functional RNAi therapeutic platform. PMID:23260577

  19. Fully automated single-use stirred-tank bioreactors for parallel microbial cultivations.

    PubMed

    Kusterer, Andreas; Krause, Christian; Kaufmann, Klaus; Arnold, Matthias; Weuster-Botz, Dirk

    2008-04-01

    Single-use stirred-tank bioreactors at the 10-mL scale, operated in a magnetic-inductive bioreaction block holding 48 bioreactors, were equipped with individual stirrer-speed tracing as well as individual DO and pH monitoring and control. A Hall-effect sensor system was integrated into the bioreaction block to individually measure the changes in magnetic field density caused by the rotating permanent magnets. A restart of the magnetic-inductive drive was initiated automatically each time a Hall-effect sensor indicated a non-rotating gas-inducing stirrer. Individual DO and pH were monitored online by measuring the fluorescence decay time of two chemical sensors immobilized at the bottom of each single-use bioreactor. Parallel DO measurements were shown to be very reliable and independent of the fermentation media applied in this study for the cultivation of Escherichia coli and Saccharomyces cerevisiae. The standard deviation of parallel pH measurements was at minimum 0.1 pH units at pH 7.0 and increased to 0.2 pH units at pH 6.0 or pH 8.5 with the complex medium applied for fermentations with S. cerevisiae. Parallel pH control was thus shown to be meaningful with a tolerance band of +/- 0.2 pH units around the set-point if the set-point is pH 6.0 or lower.
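pH control with a tolerance band of +/- 0.2 pH units around the set-point, as described, amounts to a proportional controller with a deadband. A hedged sketch of that control law (the gain, dose limit, and units are invented for illustration, not taken from the system above):

```python
def ph_control_dose(ph, setpoint=6.0, band=0.2, gain=50.0, max_dose=20.0):
    # Titration volume (uL, hypothetical units) for one control
    # interval: no action inside the tolerance band; outside it,
    # dose proportional to the error, clipped to a maximum.
    # Positive return = base addition, negative = acid addition.
    error = setpoint - ph
    if abs(error) <= band:
        return 0.0
    dose = gain * error
    return max(-max_dose, min(max_dose, dose))
```

For example, a reading of pH 6.1 against a set-point of 6.0 falls inside the band and triggers no titration, while pH 5.5 requests a clipped base dose.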

  20. Development of an automated mid-scale parallel protein purification system for antibody purification and affinity chromatography.

    PubMed

    Zhang, Chi; Long, Alexander M; Swalm, Brooke; Charest, Ken; Wang, Yan; Hu, Jiali; Schulz, Craig; Goetzinger, Wolfgang; Hall, Brian E

    2016-12-01

    Protein purification is often a bottleneck during protein generation for large molecule drug discovery. Therapeutic antibody campaigns typically require the purification of hundreds of monoclonal antibodies (mAbs) during the hybridoma process and lead optimization. With the increase in high-throughput cloning, faster DNA sequencing, and the use of parallel protein expression systems, a need for high-throughput purification approaches has evolved, particularly in the midsize range between 20 ml and 100 ml. To address this we modified a four channel Gilson solid phase extraction system (referred to as MG-SPE) with switching valves and sample holding loops to be able to perform standard affinity purification using commercially available columns and micro-titer format deep well blocks. By running 4 samples in parallel, the MG-SPE has the capacity to purify up to 24 samples of greater than 50 ml each using a single-step affinity purification protocol or a two-step protocol consisting of affinity chromatography followed by desalting/buffer exchange overnight (∼12 h run time). Our evaluation of affinity purification using mAbs and Fc-fusion proteins from mammalian cell supernatants demonstrates that the MG-SPE compared favorably with industry standard systems for both protein quality and yield. Overall the system is simple to operate and fills a void in purification processes where a simple, efficient, automated system is needed for affinity purification of midsize research samples. PMID:27498022

  2. Automated, scalable culture of human embryonic stem cells in feeder-free conditions.

    PubMed

    Thomas, Rob J; Anderson, David; Chandra, Amit; Smith, Nigel M; Young, Lorraine E; Williams, David; Denning, Chris

    2009-04-15

    Large-scale manufacture of human embryonic stem cells (hESCs) is a prerequisite for their widespread use in biomedical applications. However, current hESC culture strategies are labor-intensive and employ highly variable processes, presenting challenges for scaled production and commercial development. Here we demonstrate that passaging of the hESC lines HUES7 and NOTT1 with trypsin in feeder-free conditions is compatible with complete automation on the CompacT SelecT, a commercially available and industrially relevant robotic platform. Pluripotency was successfully retained, as evidenced by consistent proliferation during serial passage, expression of stem cell markers (OCT4, NANOG, TRA-1-81, and SSEA-4), stable karyotype, and multi-germ-layer differentiation in vitro, including to pharmacologically responsive cardiomyocytes. Automation of hESC culture will expedite cell use in clinical, scientific, and industrial applications.

  3. Development, parallelization, and automation of a gas-inducing milliliter-scale bioreactor for high-throughput bioprocess design (HTBD).

    PubMed

    Puskeiler, R; Kaufmann, K; Weuster-Botz, D

    2005-03-01

    A novel milliliter-scale bioreactor equipped with a gas-inducing impeller was developed, with oxygen transfer coefficients as high as in laboratory and industrial stirred-tank bioreactors. The bioreactor reaches oxygen transfer coefficients of >0.4 s(-1), and coefficients of >0.2 s(-1) can be maintained over a reaction volume range of 8-12 mL. A reaction block with integrated heat exchangers was developed holding 48 mL-scale bioreactors. The block can be closed with a single gas cover that spreads sterile process gas from a central inlet into the headspace of all bioreactors. The gas cover simultaneously acts as a sterile barrier, making the reaction block a stand-alone device that represents an alternative to 48 parallel-operated shake flasks on a much smaller footprint. Process control software was developed to control a liquid-handling system for automated sampling, pH titration, and substrate feeding, and a microtiter plate reader for automated at-line pH and optical density analytics. The liquid-handling parameters for titration agent, feeding solution, and cell samples were optimized to increase data quality. A simple proportional pH-control algorithm and intermittent titration enabled Escherichia coli growth to a dry cell weight of 20.5 g L(-1) in fed-batch cultivation with air aeration. Growth of E. coli at the milliliter scale (10 mL) was shown to be equivalent to laboratory scale (3 L) with regard to growth rate, mu, and biomass yield, Y(XS).
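Oxygen transfer coefficients (kLa) like those reported above are conventionally estimated by the dynamic gassing-out method, fitting ln(1 - DO/DO*) = -kLa * t to a dissolved-oxygen step response. A minimal sketch on synthetic data (a standard textbook technique, not necessarily the exact method used by these authors):

```python
import math

def estimate_kla(times, do_sat_fraction):
    # Dynamic gassing-out estimate: least-squares fit through the
    # origin of -ln(1 - DO/DO*) = kLa * t, where do_sat_fraction is
    # dissolved oxygen as a fraction of saturation.
    num = sum(t * (-math.log(1.0 - c)) for t, c in zip(times, do_sat_fraction))
    den = sum(t * t for t in times)
    return num / den

# synthetic DO response generated with kLa = 0.4 1/s
times = [1.0, 2.0, 3.0, 5.0]
resp = [1.0 - math.exp(-0.4 * t) for t in times]
kla = estimate_kla(times, resp)   # recovers 0.4
```

Real data would require subtracting the sensor response time and handling measurement noise; the fit-through-origin keeps the sketch minimal.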

  4. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple-platform, multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E, and a Beowulf cluster of Pentium workstations.
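The source of the speedup described above is that matching happens on decimated wavelet coefficients: an offset of k in the one-level wavelet domain corresponds to a translation of about 2k in the original data. A one-dimensional toy of that idea (the paper's method is 2-D and correlation-based; this single-edge-feature shortcut is a deliberate simplification):

```python
def haar_detail(sig):
    # One-level Haar detail coefficients: decimated pairwise
    # differences, which respond strongly at edges.
    return [(sig[2 * i] - sig[2 * i + 1]) / 2.0 for i in range(len(sig) // 2)]

def edge_feature(sig):
    # Feature = index of the strongest detail coefficient (an edge).
    d = haar_detail(sig)
    return max(range(len(d)), key=lambda i: abs(d[i]))

def estimate_translation(ref, moved):
    # A feature offset in the decimated wavelet domain maps to twice
    # that offset in the original signal.
    return 2 * (edge_feature(moved) - edge_feature(ref))

ref = [0.0] * 7 + [10.0] * 9     # step edge between samples 6 and 7
moved = [0.0] * 11 + [10.0] * 5  # same edge translated right by 4
shift = estimate_translation(ref, moved)
```

Because each decomposition level halves the data, coarse-to-fine matching over several levels gives the higher computational speeds the abstract mentions.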

  5. Explorations of Space-Charge Limits in Parallel-Plate Diodes and Associated Techniques for Automation

    NASA Astrophysics Data System (ADS)

    Ragan-Kelley, Benjamin

    Space-charge limited flow is a topic of much interest and varied application. We extend existing understanding of space-charge limits by simulations, and develop new tools and techniques for doing these simulations along the way. The Child-Langmuir limit is a simple analytic solution for space-charge limited current density in a one-dimensional diode. It has been previously extended to two dimensions by numerical calculation in planar geometries. By considering an axisymmetric cylindrical system with axial emission from a circular cathode of finite radius r and outer drift tube R > r and gap length L, we further examine the space charge limit in two dimensions. We simulate a two-dimensional axisymmetric parallel plate diode of various aspect ratios (r/L), and develop a scaling law for the measured two-dimensional space-charge limit (2DSCL) relative to the Child-Langmuir limit as a function of the aspect ratio of the diode. These simulations are done with a large (100T) longitudinal magnetic field to restrict electron motion to 1D, with the two-dimensional particle-in-cell simulation code OOPIC. We find a scaling law that is a monotonically decreasing function of this aspect ratio, and the one-dimensional result is recovered in the limit as r >> L. The result is in good agreement with prior results in planar geometry, where the emission area is proportional to the cathode width. We find a weak contribution from the effects of the drift tube for current at the beam edge, and a strong contribution of high current-density "wings" at the outer-edge of the beam, with a very large relative contribution when the beam is narrow. Mechanisms for enhancing current beyond the Child-Langmuir limit remain a matter of great importance. We analyze the enhancement effects of upstream ion injection on the transmitted current in a one-dimensional parallel plate diode. Electrons are field-emitted at the cathode, and ions are injected at a controlled current from the anode. 
An analytic
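    The one-dimensional Child-Langmuir limit referenced above has a well-known closed form, J = (4ε₀/9)·√(2e/mₑ)·V^(3/2)/d². A minimal sketch of evaluating it follows; the voltage and gap values in the test are illustrative assumptions, not values from the abstract:

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C
M_E = 9.109e-31       # electron rest mass, kg

def child_langmuir_j(voltage_v, gap_m):
    """1D Child-Langmuir space-charge-limited current density (A/m^2).

    J = (4*eps0/9) * sqrt(2*e/m_e) * V^(3/2) / d^2
    """
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage_v ** 1.5 / gap_m ** 2
```

    For example, a 1 kV gap of 1 cm gives roughly 7.4 × 10² A/m²; doubling the voltage scales the limit by 2^(3/2), the V^(3/2) dependence the scaling-law study builds on.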

  6. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    NASA Technical Reports Server (NTRS)

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational, and automation-capability factors upon human trust in, and reliance on, automation. In particular, it focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct the case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. Factors shaping that trust include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among the organizations involved in the system development.

  7. Automated dynamic fed-batch process and media optimization for high productivity cell culture process development.

    PubMed

    Lu, Franklin; Toh, Poh Choo; Burnett, Iain; Li, Feng; Hudson, Terry; Amanullah, Ashraf; Li, Jincai

    2013-01-01

    Current industry practices for large-scale mammalian cell cultures typically employ a standard platform fed-batch process with fixed-volume bolus feeding. Although widely used, these processes are unable to respond to actual nutrient consumption demands from the culture, which can result in accumulation of by-products and depletion of certain nutrients. This work demonstrates the application of a fully automated cell culture control, monitoring, and data processing system to achieve significant productivity improvement via dynamic feeding and media optimization. Two distinct feeding algorithms were used to dynamically alter feed rates. The first method is based upon on-line capacitance measurements, where cultures were fed based on growth and nutrient consumption rates estimated from integrated capacitance. The second method is based upon automated glucose measurements obtained from the Nova Bioprofile FLEX® autosampler, where cultures were fed to maintain a target glucose level, which in turn maintained other nutrients based on a stoichiometric ratio. All of the calculations were done automatically through in-house integration with a DeltaV process control system. Through both media and feed strategy optimization, a titer increase from the original platform titer of 5 to 6.3 g/L was achieved for cell line A, and a substantial titer increase from 4 to over 9 g/L was achieved for cell line B with comparable product quality. Glucose was found to be the best feed indicator, but not all cell lines benefited from dynamic feeding, and optimized feed media was critical to process improvement. Our work demonstrated that dynamic feeding can automatically adjust feed rates according to culture behavior, and that this advantage is best realized during early and rapid process development stages, where different cell lines or large changes in culture conditions might lead to dramatically different nutrient demands.
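    The glucose-based strategy reduces at each automated measurement to a simple mass balance: feed just enough concentrated stock to bring glucose back to the set point, with the other nutrients following stoichiometrically. A minimal sketch under that assumption (function name and all numeric values are illustrative, not taken from the study):

```python
def feed_volume_ml(measured_g_per_l, target_g_per_l, feed_conc_g_per_l, culture_vol_ml):
    """Volume of concentrated feed needed to restore glucose to the target level.

    Mass balance: glucose to add = deficit (g/L) * culture volume (L),
    assuming the feed volume is small relative to the culture volume.
    """
    deficit = max(target_g_per_l - measured_g_per_l, 0.0)
    return deficit * culture_vol_ml / feed_conc_g_per_l
```

    With a 2 L culture measured at 3 g/L against a 6 g/L target and a 500 g/L glucose stock, this calls for a 12 mL feed; when glucose is at or above target, no feed is triggered.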

  8. Performance of Gram staining on blood cultures flagged negative by an automated blood culture system.

    PubMed

    Peretz, A; Isakovich, N; Pastukh, N; Koifman, A; Glyatman, T; Brodsky, D

    2015-08-01

    Blood is one of the most important specimens sent to a microbiology laboratory for culture. Most blood cultures are incubated for 5-7 days, except in cases where there is a suspicion of infection caused by microorganisms that proliferate slowly, or infections expressed by a small number of bacteria in the bloodstream. Therefore, at the end of incubation, misidentification of positive cultures and false-negative results are a real possibility. The aim of this work was to perform a confirmation by Gram staining of the lack of any microorganisms in blood cultures that were identified as negative by the BACTEC™ FX system at the end of incubation. All bottles defined as negative by the BACTEC FX system were Gram-stained using an automatic device and inoculated on solid growth media. In our work, 15 cultures that were defined as negative by the BACTEC FX system at the end of the incubation were found to contain microorganisms when Gram-stained. The main characteristic of most bacteria and fungi growing in the culture bottles that were defined as negative was slow growth. This finding raises a problematic issue concerning the need to perform Gram staining of all blood cultures, which could overload the routine laboratory work, especially laboratories serving large medical centers and receiving a large number of blood cultures.

  9. Dereplication by automated ribotyping of a competitive exclusion culture bacterial isolate library.

    PubMed

    Sheffield, Cynthia; Andrews, Kate; Harvey, Roger; Crippen, Tawni; Nisbet, David

    2006-01-01

    Concerns over the development of antibiotic-resistant bacteria within the food animal industry have intensified the search for natural approaches to the prevention and treatment of bacterial diseases. Competitive exclusion cultures are the foundation of a disease-management strategy based on the use of benign bacterial strains to prevent the establishment of pathogenic bacteria within a specific host. Differentiation of phenotypically ambiguous isolates is a critical step in establishing a manageable library of bacteria for use in the development of defined competitive exclusion cultures. We used automated ribotyping techniques to dereplicate a large collection of phenotypically ambiguous isolates from a continuous-flow competitive exclusion culture. A total of 157 isolates were screened following an EcoRI restriction enzyme digestion. The 157 isolates were resolved into 23 ribogroups, which represents an 85% reduction in the number of isolates in the bacterial isolate library. Seventy-six percent of the isolates fit into one of five ribogroups. This work demonstrated that automated ribotyping is an effective and efficient tool for dereplication of diverse bacterial isolate libraries.

  10. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    NASA Technical Reports Server (NTRS)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.

    2004-01-01

    This report is a documentation of a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high oxygen content media into the surrounding media which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  11. A pumpless perfusion cell culture cap with two parallel channel layers keeping the flow rate constant.

    PubMed

    Lee, Dong Woo; Yi, Sang Hyun; Ku, Bosung; Kim, Jhingook

    2012-01-01

    This article presents a novel pumpless perfusion cell culture cap, the gravity-driven flow rate of which is kept constant by the height difference of two parallel channel layers. Previous pumpless perfusion cell culture systems create a gravity-driven flow by means of the hydraulic head difference (Δh) between the source reservoir and the drain reservoir. As more media passes from the source reservoir to the drain reservoir, the source media level decreases and the drain media level increases. Thus, previous works based on a gravity-driven flow were unable to supply a constant flow rate for the perfusion cell culture. The proposed perfusion cell culture cap, however, can supply a constant flow rate, because the media level remains unchanged as the media moves laterally through each channel, both channels sharing the same media level. In experiments using different fluidic resistances, the perfusion cap generated constant flow rates of 871 ± 27 μL h(-1) and 446 ± 11 μL h(-1). The 871 and 446 μL h(-1) flow rates replace the whole 20 mL of medium in the petri dish with fresh medium in 1 and 2 days, respectively. In the perfusion culture of A549 cells at the 871 μL h(-1) flow rate, the proposed cap can maintain a lactate concentration of about 2200 nmol mL(-1) and an ammonia concentration of about 3200 nmol mL(-1). Moreover, whereas the static cell culture maintains cell viability for 5 days, the perfusion cell culture at the 871 μL h(-1) flow rate can maintain cell viability for 9 days. PMID:22927366
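    The constant-flow claim follows from basic hydrostatics: the volumetric rate is the hydrostatic pressure divided by the channel's fluidic resistance, Q = ρgΔh/R, so a fixed head difference yields a fixed flow. A rough sketch of that relation (the head and resistance values below are illustrative assumptions, not the device's measured parameters):

```python
RHO = 1000.0  # density of aqueous medium, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def flow_rate_ul_per_h(delta_h_m, resistance_pa_s_per_m3):
    """Gravity-driven volumetric flow Q = dP / R, with dP = rho * g * dh.

    Returns the flow in uL/h for a head difference in m and a fluidic
    resistance in Pa*s/m^3.
    """
    q_m3_per_s = RHO * G * delta_h_m / resistance_pa_s_per_m3
    return q_m3_per_s * 1e9 * 3600.0  # convert m^3/s -> uL/h
```

    Because Q is linear in Δh, doubling the head doubles the flow; the cap's contribution is keeping Δh fixed so Q stays constant over days of culture.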

  12. A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Kıvılcım, C. Ö.; Duran, Z.

    2016-06-01

    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. Conventional measurement techniques, however, require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation has become an active area of research; however, fully automated cultural heritage documentation remains an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets, based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We demonstrate the contribution of our methodology, implemented in an open-source software environment, using the example project of a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  13. Quantification of Dynamic Morphological Drug Responses in 3D Organotypic Cell Cultures by Automated Image Analysis

    PubMed Central

    Härmä, Ville; Schukov, Hannu-Pekka; Happonen, Antti; Ahonen, Ilmari; Virtanen, Johannes; Siitari, Harri; Åkerfelt, Malin; Lötjönen, Jyrki; Nees, Matthias

    2014-01-01

    Glandular epithelial cells differentiate into complex multicellular or acinar structures, when embedded in three-dimensional (3D) extracellular matrix. The spectrum of different multicellular morphologies formed in 3D is a sensitive indicator for the differentiation potential of normal, non-transformed cells compared to different stages of malignant progression. In addition, single cells or cell aggregates may actively invade the matrix, utilizing epithelial, mesenchymal or mixed modes of motility. Dynamic phenotypic changes involved in 3D tumor cell invasion are sensitive to specific small-molecule inhibitors that target the actin cytoskeleton. We have used a panel of inhibitors to demonstrate the power of automated image analysis as a phenotypic or morphometric readout in cell-based assays. We introduce a streamlined stand-alone software solution that supports large-scale high-content screens, based on complex and organotypic cultures. AMIDA (Automated Morphometric Image Data Analysis) allows quantitative measurements of large numbers of images and structures, with a multitude of different spheroid shapes, sizes, and textures. AMIDA supports an automated workflow, and can be combined with quality control and statistical tools for data interpretation and visualization. We have used a representative panel of 12 prostate and breast cancer lines that display a broad spectrum of different spheroid morphologies and modes of invasion, challenged by a library of 19 direct or indirect modulators of the actin cytoskeleton which induce systematic changes in spheroid morphology and differentiation versus invasion. These results were independently validated by 2D proliferation, apoptosis and cell motility assays. We identified three drugs that primarily attenuated the invasion and formation of invasive processes in 3D, without affecting proliferation or apoptosis. Two of these compounds block Rac signalling, one affects cellular cAMP/cGMP accumulation. 
Our approach supports

  14. Evaluation of a Multi-Parameter Sensor for Automated, Continuous Cell Culture Monitoring in Bioreactors

    NASA Technical Reports Server (NTRS)

    Pappas, D.; Jeevarajan, A.; Anderson, M. M.

    2004-01-01

    Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments in microgravity. Measurement of cell culture medium allows for the optimization of culture conditions on orbit to maximize cell growth and minimize unnecessary exchange of medium. While several discrete sensors exist to measure culture health, a multi-parameter sensor would simplify the experimental apparatus. One such sensor, the Paratrend 7, consists of three optical fibers for measuring pH, dissolved oxygen (pO2), and dissolved carbon dioxide (pCO2), and a thermocouple to measure temperature. The sensor bundle was designed for intra-arterial placement in clinical patients, and potentially can be used in NASA's Space Shuttle and International Space Station biotechnology program bioreactors. Methods: A Paratrend 7 sensor was placed at the outlet of a rotating-wall perfused vessel bioreactor system inoculated with BHK-21 (baby hamster kidney) cells. Cell culture medium (GTSF-2, composed of 40% minimum essential medium, 60% L-15 Leibovitz medium) was manually measured using a bench-top blood gas analyzer (BGA, Ciba-Corning). Results: A Paratrend 7 sensor was used over a long-term (>120 day) cell culture experiment. The sensor was able to track changes in cell medium pH, pO2, and pCO2 due to the consumption of nutrients by the BHK-21 cells. When compared to manually obtained BGA measurements, the sensor had good agreement for pH, pO2, and pCO2, with bias [and precision] of 0.02 [0.15], 1 mm Hg [18 mm Hg], and -4.0 mm Hg [8.0 mm Hg], respectively. The Paratrend oxygen sensor was recalibrated (offset) periodically due to drift. The bias for the raw (no offset or recalibration) oxygen measurements was 42 mm Hg [38 mm Hg]. The measured response (rise) time of the sensor was 20 +/- 4 s for pH, 81 +/- 53 s for pCO2, and 51 +/- 20 s for pO2. For long-term cell culture measurements, these response times are more than adequate.
Based on these findings, the Paratrend sensor could
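    The bias [precision] figures quoted in this record are the mean and standard deviation of the sensor-minus-BGA differences. A small sketch of that comparison (the sample readings below are made up for illustration):

```python
def bias_and_precision(sensor, reference):
    """Mean difference (bias) and sample SD of differences (precision)
    between paired sensor and reference readings."""
    diffs = [s - r for s, r in zip(sensor, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    variance = sum((d - bias) ** 2 for d in diffs) / (n - 1)
    return bias, variance ** 0.5
```

    Three paired pH readings offset by a constant 0.2 would report a bias of 0.2 with near-zero precision (SD), matching how the agreement statistics above are read.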

  15. Automated Detection of Soma Location and Morphology in Neuronal Network Cultures

    PubMed Central

    Ozcan, Burcin; Negi, Pooran; Laezza, Fernanda; Papadakis, Manos; Labate, Demetrio

    2015-01-01

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, these types of stacks contain a small number of images so that only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present in this paper applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of level set method and directional multiscale filters. We also present an approach to extract the soma’s surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications. PMID:25853656

  16. An engineered approach to stem cell culture: automating the decision process for real-time adaptive subculture of stem cells.

    PubMed

    Ker, Dai Fei Elmer; Weiss, Lee E; Junkers, Silvina N; Chen, Mei; Yin, Zhaozheng; Sandbothe, Michael F; Huh, Seung-il; Eom, Sungeun; Bise, Ryoma; Osuna-Highley, Elvira; Kanade, Takeo; Campbell, Phil G

    2011-01-01

    Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and developing robotic cell
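    The server's look-ahead notification can be sketched as a threshold-crossing prediction. Assuming simple exponential growth of confluency (the abstract does not specify the system's actual predictor; all names and numbers below are illustrative), the 4-hour warning reduces to solving for the crossing time:

```python
import math

def hours_until_threshold(confluency_now, growth_rate_per_h, threshold):
    """Assuming exponential growth c(t) = c0 * exp(k*t), solve for the time t
    at which confluency reaches the subculture threshold."""
    if confluency_now >= threshold:
        return 0.0
    return math.log(threshold / confluency_now) / growth_rate_per_h

def should_notify(confluency_now, growth_rate_per_h, threshold, lead_h=4.0):
    """Trigger the early warning once the predicted crossing is within lead_h hours."""
    return hours_until_threshold(confluency_now, growth_rate_per_h, threshold) <= lead_h
```

    At 40% confluency with a 5%/h growth rate and an 80% threshold, the predicted crossing is about 13.9 h away, so no notification fires; at 70% the crossing is under 3 h away and the alert would be sent.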

  17. A novel automated bioreactor for scalable process optimisation of haematopoietic stem cell culture.

    PubMed

    Ratcliffe, E; Glen, K E; Workman, V L; Stacey, A J; Thomas, R J

    2012-10-31

    Proliferation and differentiation of haematopoietic stem cells (HSCs) from umbilical cord blood at large scale will potentially underpin production of a number of therapeutic cellular products in development, including erythrocytes and platelets. However, to achieve production processes that are scalable and optimised for cost and quality, scaled-down development platforms that can define process parameter tolerances and consequent manufacturing controls are essential. We have demonstrated the potential of a new, automated, 24×15 mL replicate suspension bioreactor system, with online monitoring and control, to develop an HSC proliferation and differentiation process for erythroid-committed cells (CD71(+), CD235a(+)). Cell proliferation was relatively robust to cell density and oxygen levels and reached up to 6 population doublings over 10 days. The maximum suspension culture density for a 48 h total media exchange protocol was established to be on the order of 10(7) cells/mL. This system will be valuable for the further cost reduction and optimisation of HSC suspension culture necessary before conventional stirred tank technology can be applied to the scaled manufacture of HSC-derived products.

  18. 3D matrix-based cell cultures: Automated analysis of tumor cell survival and proliferation

    PubMed Central

    EKE, IRIS; HEHLGANS, STEPHANIE; SANDFORT, VEIT; CORDES, NILS

    2016-01-01

    Three-dimensional ex vivo cell cultures mimic physiological in vivo growth conditions, thereby contributing significantly to our understanding of tumor cell growth and survival, therapy resistance, and the identification of novel potent cancer targets. In the present study, we describe an advanced three-dimensional cell culture methodology for investigating cellular survival and proliferation in human carcinoma cells after cancer therapy, including molecular therapeutics. Single cells are embedded into laminin-rich extracellular matrix and, once the matrix has consolidated and the cells approximate in vivo morphology, can be treated with cytotoxic drugs, ionizing or UV radiation, or any other substance of interest. Subsequently, cells are allowed to grow for automated determination of clonogenic survival (colony number) or proliferation (colony size). The entire 3D cell plating protocol takes ~1 h of working time and proceeds for ~7 days before evaluation. This newly developed method broadens the spectrum of exploration of malignant tumors and other diseases and enables more reliable data to be obtained on cancer treatment efficacy. PMID:26549537

  19. A landscape lake flow pattern design approach based on automated CFD simulation and parallel multiple objective optimization.

    PubMed

    Guo, Hao; Tian, Yimei; Shen, Hailiang; Wang, Yi; Kang, Mengxin

    2016-01-01

    A design approach for determining the optimal flow pattern in a landscape lake is proposed based on FLUENT simulation, multiple objective optimization, and parallel computing. This paper formulates the design as a multi-objective optimization problem, with lake circulation effects and operation cost as the two objectives, and solves the optimization problem with the non-dominated sorting genetic algorithm II (NSGA-II). The lake flow pattern is modelled in FLUENT. The parallelization applies to multiple FLUENT instance runs, which is different from the FLUENT internal parallel solver. This approach: (1) proposes lake flow pattern metrics, i.e. weighted average water flow velocity, water volume percentage of low flow velocity, and variance of flow velocity; (2) defines user-defined functions for boundary setting and objective and constraint calculation; and (3) parallelizes the execution of multiple FLUENT instance runs to significantly reduce the optimization wall-clock time. The proposed approach is demonstrated through a case study of Meijiang Lake in Tianjin, China. PMID:27642835
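    The multiple-instance parallelism described here can be sketched with a worker pool in which each worker would launch its own solver run in batch mode. Everything below is an illustrative assumption: the surrogate objective formulas stand in for a real FLUENT run, and the NSGA-II loop itself is omitted:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_design(inlet_velocity):
    """Stand-in for one FLUENT batch run returning the two objectives.

    A real implementation would start a separate FLUENT instance (e.g. via
    subprocess with a journal file) and parse the exported flow field; the
    toy formulas below exist only so the sketch runs end to end.
    """
    circulation_quality = 1.0 / (1.0 + inlet_velocity)  # surrogate metric
    operation_cost = inlet_velocity ** 2                # pumping-cost proxy
    return circulation_quality, operation_cost

def evaluate_population(velocities, workers=4):
    # One solver run per candidate design, executed concurrently -- the
    # multiple-instance parallelism the paper distinguishes from FLUENT's
    # own internal parallel solver.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_design, velocities))
```

    Threads are a reasonable fit here because each worker's real task is waiting on an external solver process, so the pool's size directly bounds the number of concurrent FLUENT instances.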


  1. Integrated microdevice for long-term automated perfusion culture without shear stress and real-time electrochemical monitoring of cells.

    PubMed

    Li, Lin-Mei; Wang, Wei; Zhang, Shu-Hui; Chen, Shi-Jing; Guo, Shi-Shang; Français, Olivier; Cheng, Jie-Ke; Huang, Wei-Hua

    2011-12-15

    Electrochemical techniques based on ultramicroelectrodes (UMEs) play a significant role in real-time monitoring of chemical messengers' release from single cells. At the same time, precise monitoring of cells in vitro strongly depends on the adequate construction of the cellular physiological microenvironment. In this paper, we developed a multilayer microdevice which integrates a high-aspect-ratio poly(dimethylsiloxane) (PDMS) microfluidic device for long-term automated perfusion culture of cells without shear stress and an independently addressable microelectrode array (IAMEA) for real-time electrochemical monitoring of the cultured cells. A novel design using a high aspect ratio between a circular "moat" and a ring-shaped micropillar array surrounding the cell culture chamber, combined with an automated "circular-centre" and "bottom-up" perfusion model, successfully provided continuous fresh medium and a stable, uniform microenvironment for the cells. Two weeks of automated culture of a human umbilical endothelial cell line (ECV304) and neuronal differentiation of rat pheochromocytoma (PC12) cells have been realized using this device. Furthermore, the quantal release of dopamine from individual PC12 cells during their culture or propagation process was amperometrically monitored in real time. The multifunctional microdevice developed in this paper integrates cellular microenvironment construction with real-time monitoring of cells during their physiological processes, and could provide a versatile platform for cell-based biomedical analysis.

  2. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    NASA Astrophysics Data System (ADS)

    Giuliano, M. G.

    2014-06-01

    In the field of Cultural Heritage, the use of automated photogrammetric systems based on Structure from Motion (SfM) techniques is widespread, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image Based Modeling) and a classical survey with a total station (Nikon Nivo C). Data acquisition was carried out with a Canon EOS 5D Mark II digital camera and a Canon EF 17-40 mm f/4L USM lens @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out with the software Agisoft PhotoScan, and the final result was a scaled 3D model of the monument, imported into the software MeshLab for viewing. Three orthophotos in JPEG format were extracted from the model and then imported into AutoCAD to obtain façade surveys.

  3. PetriJet Platform Technology: An Automated Platform for Culture Dish Handling and Monitoring of the Contents.

    PubMed

    Vogel, Mathias; Boschke, Elke; Bley, Thomas; Lenk, Felix

    2015-08-01

    Due to the size of the required equipment, automated laboratory systems are often unavailable or impractical for use in small- and mid-sized laboratories. However, recent developments in automation engineering provide endless possibilities for incorporating benchtop devices. Here, the authors describe the development of a platform technology to handle sealed culture dishes. The programming is based on the Petri net method and implemented via Codesys V3.5 pbF. The authors developed a system of three independent electrically driven axes capable of handling sealed culture dishes. The device performs two different processes. First, it automatically obtains an image of every processed culture dish. Second, a server-based image analysis algorithm provides the user with several parameters of the cultivated sample on the culture dish. For demonstration purposes, the authors developed a continuous, systematic, nondestructive, and quantitative method for monitoring the growth of a hairy root culture. New results can be displayed with respect to the previous images. This system is highly accurate, and the results can be used to simulate the growth of biological cultures. The authors believe that the innovative features of this platform can be implemented, for example, in the food industry, clinical environments, and research laboratories.

  4. Field measurement of acid gases and soluble anions in atmospheric particulate matter using a parallel plate wet denuder and an alternating filter-based automated analysis system.

    PubMed

    Boring, C Bradley; Al-Horr, Rida; Genfa, Zhang; Dasgupta, Pumendu K; Martin, Michael W; Smith, William F

    2002-03-15

    We present a new fully automated instrument for the measurement of acid gases and soluble anionic constituents of atmospheric particulate matter. The instrument operates in two independent parallel channels. In one channel, a wet denuder collects soluble acid gases; these are analyzed by anion chromatography (IC). In a second channel, a cyclone removes large particles and the aerosol stream is then processed by another wet denuder to remove potentially interfering gases. The particles are then collected by one of two glass fiber filters which are alternately sampled, washed, and dried. The washings are preconcentrated and analyzed by IC. Detection limits of low to subnanogram per cubic meter concentrations of most gaseous and particulate constituents can be readily attained. The instrument has been extensively field-tested; some field data are presented. Results of attempts to decipher the total anionic constitution of urban ambient aerosol by IC-MS analysis are also presented.

  5. Semi-automated relative quantification of cell culture contamination with mycoplasma by Photoshop-based image analysis on immunofluorescence preparations.

    PubMed

    Kumar, Ashok; Yerneni, Lakshmana K

    2009-01-01

    Mycoplasma contamination in cell culture is a serious setback for the cell-culturist. The experiments undertaken using contaminated cell cultures are known to yield unreliable or false results due to various morphological, biochemical and genetic effects. Earlier surveys revealed incidences of mycoplasma contamination in cell cultures to range from 15 to 80%. Out of a vast array of methods for detecting mycoplasma in cell culture, the cytological methods directly demonstrate the contaminating organism present in association with the cultured cells. In this investigation, we report the adoption of a cytological immunofluorescence assay (IFA), in an attempt to obtain a semi-automated relative quantification of contamination by employing the user-friendly Photoshop-based image analysis. The study performed on 77 cell cultures randomly collected from various laboratories revealed mycoplasma contamination in 18 cell cultures simultaneously by IFA and Hoechst DNA fluorochrome staining methods. It was observed that the Photoshop-based image analysis on IFA stained slides was very valuable as a sensitive tool in providing quantitative assessment on the extent of contamination both per se and in comparison to cellularity of cell cultures. The technique could be useful in estimating the efficacy of anti-mycoplasma agents during decontaminating measures.

  6. Validation of the BacT/ALERT®3D automated culture system for the detection of microbial contamination of epithelial cell culture medium.

    PubMed

    Plantamura, E; Huyghe, G; Panterne, B; Delesalle, N; Thépot, A; Reverdy, M E; Damour, O; Auxenfans, Céline

    2012-08-01

    Living tissues engineered for regenerative therapy cannot withstand the usual pharmacopoeial methods of purification and terminal sterilization. Consequently, these products must be manufactured under aseptic conditions in facilities with a microbiologically controlled environment. This study was designed to validate the BacT/ALERT(®)3D automated culture system for microbiological control of epithelial cell culture medium (ECCM). Suspensions of the nine microorganisms recommended by the European Pharmacopoeia (Chap. 2.6.27: "Microbiological control of cellular products"), plus one species from oral mucosa and two negative controls with no microorganisms, were prepared in ECCM. They were inoculated into FA (anaerobic) and SN (aerobic) culture bottles (bioMérieux, Lyon, France) and incubated in a BacT/ALERT(®)3D automated culture system. For each species, five sets of bottles were inoculated for reproducibility testing: one sample was incubated at the French Health Products Agency laboratory (reference) and the four others at the Cell and Tissue Bank of Lyon, France. The specificity of the positive culture bottles was verified by Gram staining, followed by subculture to identify the microorganism grown. The BacT/ALERT(®)3D system detected all the inoculated microorganisms in less than 2 days, except Propionibacterium acnes, which was detected in 3 days. In conclusion, this study demonstrates that the BacT/ALERT(®)3D system can detect both aerobic and anaerobic bacterial and fungal contamination of an epithelial cell culture medium, consistent with the European Pharmacopoeia chapter 2.6.27 recommendations. It showed the specificity, sensitivity, and precision of the BacT/ALERT(®)3D method, since all the microorganisms seeded were detected at both sites and the uncontaminated ECCM remained negative at 7 days. PMID:22160810

  7. On-line automated sample preparation for liquid chromatography using parallel supported liquid membrane extraction and microporous membrane liquid-liquid extraction.

    PubMed

    Sandahl, Margareta; Mathiasson, Lennart; Jönsson, Jan Ake

    2002-10-25

    An automated system was developed for analysis of non-polar and polar ionisable compounds at trace levels in natural water. Sample work-up was performed in a flow system using two parallel membrane extraction units. This system was connected on-line to a reversed-phase HPLC system for final determination. One membrane unit was used for supported liquid membrane (SLM) extraction, which is suitable for ionisable or permanently charged compounds. The other unit was used for microporous membrane liquid-liquid extraction (MMLLE), suitable for uncharged compounds. The fungicide thiophanate methyl and its polar metabolites carbendazim and 2-aminobenzimidazole were used as model compounds. The whole system was controlled by means of four syringe pumps. While one part of the sample was being extracted by the SLM technique, the extract from the MMLLE extraction was analysed, and vice versa. This gave a total analysis time of 63 min per sample, corresponding to a throughput of 22 samples per 24 h.
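    Because extraction of one channel is overlapped with chromatographic analysis of the other, a new sample completes every 63-min cycle, and the quoted daily throughput follows directly (a trivial illustration of the scheduling arithmetic, not part of the authors' system):

```python
def daily_throughput(minutes_per_cycle, hours=24):
    """Samples completed per day when extraction of one channel is
    interleaved with analysis of the other, so one sample finishes
    per cycle. Partial cycles at the end of the day are discarded."""
    return int(hours * 60 // minutes_per_cycle)
```

`daily_throughput(63)` returns 22, matching the reported 22 samples per 24 h.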

  8. Evaluation of a Fully Automated Research Prototype for the Immediate Identification of Microorganisms from Positive Blood Cultures under Clinical Conditions

    PubMed Central

    Hyman, Jay M.; Walsh, John D.; Ronsick, Christopher; Wilson, Mark; Hazen, Kevin C.; Borzhemskaya, Larisa; Link, John; Clay, Bradford; Ullery, Michael; Sanchez-Illan, Mirta; Rothenberg, Steven; Robinson, Ron; van Belkum, Alex

    2016-01-01

    ABSTRACT A clinical laboratory evaluation of an intrinsic fluorescence spectroscopy (IFS)-based identification system paired to a BacT/Alert Virtuo microbial detection system (bioMérieux, Inc., Durham, NC) was performed to assess the potential for fully automated identification of positive blood cultures. The prototype IFS system incorporates a novel method combining a simple microbial purification procedure with rapid in situ identification via spectroscopy. Results were available within 15 min of a bottle signaling positive and required no manual intervention. Among cultures positive for organisms contained within the database and producing acceptable spectra, 75 of 88 (85.2%) and 79 of 88 (89.8%) were correctly identified to the species and genus level, respectively. These results are similar to the performance of existing rapid methods. PMID:27094332

  9. Quantitative high-throughput population dynamics in continuous-culture by automated microscopy.

    PubMed

    Merritt, Jason; Kuehn, Seppe

    2016-01-01

    We present a high-throughput method to measure abundance dynamics in microbial communities sustained in continuous-culture. Our method uses custom epi-fluorescence microscopes to automatically image single cells drawn from a continuously-cultured population while precisely controlling culture conditions. For clonal populations of Escherichia coli our instrument reveals history-dependent resilience and growth rate dependent aggregation. PMID:27616752

  10. Quantitative high-throughput population dynamics in continuous-culture by automated microscopy

    PubMed Central

    Merritt, Jason; Kuehn, Seppe

    2016-01-01

    We present a high-throughput method to measure abundance dynamics in microbial communities sustained in continuous-culture. Our method uses custom epi-fluorescence microscopes to automatically image single cells drawn from a continuously-cultured population while precisely controlling culture conditions. For clonal populations of Escherichia coli our instrument reveals history-dependent resilience and growth rate dependent aggregation. PMID:27616752

  11. Blood culture examinations at a community hospital without a microbiology laboratory: using an automated blood culture system and performing a Gram stain on positive culture bottles in the institution.

    PubMed

    Saito, Takashi; Aoki, Yoji; Mori, Yoshihiro; Kohi, Fumikazu

    2004-08-01

    To enable detection of microorganisms from blood culture bottles in hospitals without a microbiology laboratory, we changed our system of blood culture examinations. From July 2002 to December 2002 (first period), the Oxoid Signal blood culture system was used and all blood cultures were submitted to an external clinical testing laboratory. From January 2003 to June 2003 (latter period), the BacT/Alert system was used and Gram staining of positive culture bottles was performed in our institution. A total of 210 and 193 blood cultures were processed during the first and latter periods, respectively. There were 40 (19.0%) positive cultures in the first period and 32 (16.6%) positive cultures in the latter period. The times required from specimen collection to the Gram stain result were 3.8 and 1.0 days in the first and latter periods, respectively. The times required for the final report of the blood cultures in the first and latter periods were 5.8 and 4.9 days, respectively. We conclude that using a continuously monitoring, automated blood culture system and performing Gram stains on positive culture bottles in institutions without microbiology laboratories may help physicians rapidly determine the presence of microorganisms and begin adequate anti-infective therapy.

  12. An automated robotic platform for rapid profiling oligosaccharide analysis of monoclonal antibodies directly from cell culture.

    PubMed

    Doherty, Margaret; Bones, Jonathan; McLoughlin, Niaobh; Telford, Jayne E; Harmon, Bryan; DeFelippis, Michael R; Rudd, Pauline M

    2013-11-01

    Oligosaccharides attached to Asn297 in each of the CH2 domains of monoclonal antibodies play an important role in antibody effector functions by modulating the affinity of interaction with Fc receptors displayed on cells of the innate immune system. Rapid, detailed, and quantitative N-glycan analysis is required at all stages of bioprocess development to ensure the safety and efficacy of the therapeutic. The high sample numbers generated during quality by design (QbD) and process analytical technology (PAT) create a demand for high-performance, high-throughput analytical technologies for comprehensive oligosaccharide analysis. We have developed an automated 96-well plate-based sample preparation platform for high-throughput N-glycan analysis using a liquid handling robotic system. Complete process automation includes monoclonal antibody (mAb) purification directly from bioreactor media, glycan release, fluorescent labeling, purification, and subsequent ultra-performance liquid chromatography (UPLC) analysis. Sample preparation through to the start of analysis is completed within 5 h. The automated sample preparation platform can easily be interfaced with other downstream analytical technologies, including mass spectrometry (MS) and capillary electrophoresis (CE), for rapid characterization of oligosaccharides present on therapeutic antibodies.

  13. Design and Performance of an Automated Bioreactor for Cell Culture Experiments in a Microgravity Environment

    NASA Astrophysics Data System (ADS)

    Kim, Youn-Kyu; Park, Seul-Hyun; Lee, Joo-Hee; Choi, Gi-Hyuk

    2015-03-01

    In this paper, we describe the development of a bioreactor for a cell-culture experiment on the International Space Station (ISS). The bioreactor is an experimental device for culturing mouse muscle cells in a microgravity environment. The purpose of the experiment was to assess the impact of microgravity on the muscles, to address the possibility of long-term human residence in space. After investigation of previously developed bioreactors and analysis of the requirements for microgravity cell culture experiments, a bioreactor design is herein proposed that is able to automatically culture 32 samples simultaneously. The design is capable of automatic control of temperature, humidity, and culture-medium injection rate, and satisfies the interface requirements of the ISS. Since bioreactors are vulnerable to cell contamination, the medium-circulation modules were designed to be completely replaceable, so that the bioreactor can be reused after each experiment. The bioreactor control system is designed to circulate culture media to 32 culture chambers at a maximum speed of 1 ml/min, to maintain the temperature of the reactor at 36°C, and to keep the relative humidity of the reactor above 70%. Because bubbles in the culture media negatively affect cell culture, a de-bubbler unit was provided to eliminate such bubbles. A working model of the reactor was built according to the new design to verify its performance, and was used to perform a cell culture experiment that confirmed the feasibility of this device.
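    Setpoint regulation like the 36°C target above is often done with a simple hysteresis (bang-bang) loop; the sketch below is an assumed control scheme for illustration, not the actual flight controller described in the paper:

```python
def heater_command(temp_c, setpoint=36.0, band=0.5, heater_on=False):
    """Bang-bang temperature control with hysteresis: switch the
    heater on below setpoint - band, off above setpoint + band, and
    keep the previous state inside the dead band (which prevents
    rapid on/off chatter around the setpoint)."""
    if temp_c < setpoint - band:
        return True
    if temp_c > setpoint + band:
        return False
    return heater_on
```

The same pattern would apply to the humidity target (>70% relative humidity), with the sensor reading and setpoint swapped accordingly.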

  14. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    PubMed

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. PMID:24037521
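    The core of the segmentation step, local contrast thresholding, can be sketched in a few lines of NumPy. This is an illustrative re-implementation under assumed parameter values, not the PHANTAST source; the halo-correction step is omitted:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_contrast_mask(image, size=15, threshold=0.02):
    """Flag pixels whose local contrast (standard deviation within a
    size x size window) exceeds a threshold. In phase contrast images
    cellular regions are textured while background is nearly flat, so
    high local contrast marks cellular objects. Edges are padded by
    reflection so the mask has the same shape as the input."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    windows = sliding_window_view(padded, (size, size))
    local_std = windows.std(axis=(-1, -2))
    return local_std > threshold

def confluency(mask):
    """Confluency as the fraction of image area covered by cells."""
    return float(mask.mean())
```

Once such a mask is available, confluency, cell density (via connected components), and per-object morphology all follow from standard binary-image operations, which is how the paper derives its culture characteristics.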

  16. Fully-automated roller bottle handling system for large scale culture of mammalian cells.

    PubMed

    Kunitake, R; Suzuki, A; Ichihashi, H; Matsuda, S; Hirai, O; Morimoto, K

    1997-01-20

    A fully automatic, continuous cell culture system based on roller bottles is described in this paper. The system includes a culture rack storage station for storing a large number of roller bottles filled with culture medium and inoculated with mammalian cells, and a mass-handling facility for extracting completed cultures from the roller bottles and replacing the culture medium. The component units of the system were controlled either by a general-purpose programmable logic controller or by a dedicated controller. The system provided four sequential operation modes: cell inoculation, medium change, harvesting, and medium change. The operator could easily select and change the appropriate mode from outside the aseptic area. The system made possible the large-scale production of mammalian cells and the manufacture and stabilization of high-quality products such as erythropoietin under total aseptic control, opening the door to industrial production of physiologically active substances as pharmaceutical drugs by mammalian cell culture.

  17. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    ERIC Educational Resources Information Center

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  18. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    PubMed Central

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-01

    Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at . 
This site also provides detailed instructions for

  19. Attempts to Automate the Process of Generation of Orthoimages of Objects of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Podlasiak, P.; Zawieska, D.

    2015-02-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. The orthoimage is a cartometric form of photographic presentation of information in a two-dimensional reference system. The paper discusses the automation of orthoimage generation based on TLS data and digital images. Attempts are now being made to apply modern technologies not only for surveying but also during data processing. This paper presents work on the use of appropriate algorithms and the authors' own application for automatic determination of the projection plane needed to derive intensity orthoimages from TLS data; such planes must be defined manually in most popular TLS data processing applications. A separate issue related to RGB image generation is the orientation of digital images relative to the scans, which is important particularly when scans and photographs are not taken simultaneously. The paper presents experiments on the use of the SIFT algorithm for automatic matching of intensity orthoimages and digital (RGB) photographs. Satisfactory results were obtained both for the automation process and for the quality of the resulting orthoimages.
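    The SIFT matching stage pairs each feature descriptor from one image with its nearest neighbour in the other, accepting a pair only if it passes Lowe's ratio test. A minimal NumPy sketch of that matching step (illustrative; the authors presumably used an existing SIFT implementation rather than this brute-force search):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors (one per row) between two images by
    nearest-neighbour search with Lowe's ratio test: a match is kept
    only when the best distance is clearly smaller than the
    second-best, which rejects ambiguous correspondences.
    Returns a list of (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The resulting correspondences are what an orientation procedure would use to register the digital photographs against the intensity orthoimages.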

  20. Developmental downregulation of GABAergic drive parallels formation of functional synapses in cultured mouse neocortical networks.

    PubMed

    Klueva, Julia; Meis, Susanne; de Lima, Ana D; Voigt, Thomas; Munsch, Thomas

    2008-06-01

    Networks of cortical neurons in vitro spontaneously develop synchronous oscillatory electrical activity at around the second week in culture. However, the underlying mechanisms, and in particular the role of GABAergic interneurons in the initiation and synchronization of oscillatory activity in developing cortical networks, remain elusive. Here, we examined the intrinsic properties and the development of GABAergic and glutamatergic input onto presumed projection neurons (PNs) and large interneurons (L-INs) in cortical cultures of GAD67-GFP mice. Cultures developed spontaneous synchronous activity already at 5-7 days in vitro (DIV), as revealed by imaging transient changes in Fluo-3 fluorescence. Concurrently, spontaneous glutamate-mediated and GABA(A)-mediated postsynaptic currents (sPSCs) occurred at 5 DIV. For both types of neurons the frequency of glutamatergic and GABAergic sPSCs increased with DIV, whereas the charge transfer of glutamatergic sPSCs increased and the charge transfer of GABAergic sPSCs decreased with cultivation time. The ratio between GABAergic and the overall charge transfer was significantly reduced with DIV for L-INs and PNs, indicating an overall reduction in GABAergic synaptic drive with maturation of the network. In contrast, analysis of miniature PSCs (mPSCs) revealed no significant changes of charge transfer with DIV for both types of neurons, indicating that the reduction in GABAergic drive was not due to a decreased number of functional synapses. Our data suggest that the global reduction in GABAergic synaptic drive, together with more synaptic input to PNs and L-INs during maturation, may enhance rhythmogenesis of the network and increase synchronization at the level of population bursts. PMID:18361402

  1. Project for an Automated Primary-Grade Reading and Arithmetic Curriculum for Culturally-Deprived Children. Progress Report Number 5, July 1 to December 31, 1966.

    ERIC Educational Resources Information Center

    Atkinson, Richard C.; Suppes, Patrick

    This report on the progress of the IBM 1800/1500 CAI system, an automated reading and arithmetic curriculum for culturally deprived children in the primary grades, discusses the problems involved in getting the system into operation in the Brentwood School in Stanford, Calif. The operational features of this IBM system and the methods by which the…

  2. Automated Method for the Rapid and Precise Estimation of Adherent Cell Culture Characteristics from Phase Contrast Microscopy Images

    PubMed Central

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-01-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521

  3. Short report: Failure of Burkholderia pseudomallei to grow in an automated blood culture system.

    PubMed

    Teerawattanasook, Nittaya; Limmathurotsakul, Direk; Day, Nicholas P J; Wuthiekanun, Vanaporn

    2014-12-01

    We compared the organisms isolated from 30,210 pairs of blood culture bottles by using the BacT/Alert system and the conventional system. Overall, 2,575 (8.5%) specimens were culture positive for pathogenic organisms. The sensitivity for detection of pathogenic organisms with the BacT/Alert system (85.6%, 2,203 of 2,575) was significantly higher than that with the conventional method (74.1%, 1,908 of 2,575; P < 0.0001). However, Burkholderia pseudomallei was isolated less often with the BacT/Alert system (73.5%, 328 of 446) than with the conventional system (90.3%, 403 of 446; P < 0.0001). This finding suggests that use of the conventional culture method in conjunction with the BacT/Alert system may improve the isolation rate for B. pseudomallei in melioidosis-endemic areas.
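    The significance comparison above reduces to a 2×2 contingency test (detected vs. missed, per system). A hand-rolled Pearson chi-square, without continuity correction (the abstract does not state which test the authors used), reproduces the conclusion:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    e.g. a/b = detected/missed for one culture system and c/d for the
    other. One degree of freedom; no continuity correction."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator
```

For the B. pseudomallei table (328 detected / 118 missed with BacT/Alert vs. 403 / 43 with the conventional system), the statistic is about 42.6, far above the roughly 15.1 needed for P < 0.0001 at one degree of freedom.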

  4. High-Throughput, Automated Protein A Purification Platform with Multiattribute LC-MS Analysis for Advanced Cell Culture Process Monitoring.

    PubMed

    Dong, Jia; Migliore, Nicole; Mehrman, Steven J; Cunningham, John; Lewis, Michael J; Hu, Ping

    2016-09-01

    The levels of many product related variants observed during the production of monoclonal antibodies are dependent on control of the manufacturing process, especially the cell culture process. However, it is difficult to characterize samples pulled from the bioreactor due to the low levels of product during the early stages of the process and the high levels of interfering reagents. Furthermore, analytical results are often not available for several days, which slows the process development cycle and prevents "real time" adjustments to the manufacturing process. To reduce the delay and enhance our ability to achieve quality targets, we have developed a low-volume, high-throughput, and high-content analytical platform for at-line product quality analysis. This workflow includes an automated, 96-well plate protein A purification step to isolate antibody product from the cell culture fermentation broth, followed by rapid, multiattribute LC-MS analysis. We have demonstrated quantitative correlations between particular process parameters with the levels of glycosylated and glycated species in a series of small scale experiments, but the platform could be used to monitor other attributes and applied across the biopharmaceutical industry. PMID:27487007

  5. Automated analysis of food-borne pathogens using a novel microbial cell culture, sensing and classification system.

    PubMed

    Xiang, Kun; Li, Yinglei; Ford, William; Land, Walker; Schaffer, J David; Congdon, Robert; Zhang, Jing; Sadik, Omowunmi

    2016-02-21

    We hereby report the design and implementation of an Autonomous Microbial Cell Culture and Classification (AMC(3)) system for rapid detection of food pathogens. Traditional food testing methods require multistep procedures and long incubation periods, and are thus prone to human error. AMC(3) introduces a "one click approach" to the detection and classification of pathogenic bacteria: once the cultured materials are prepared, all operations are automatic. AMC(3) is an integrated sensor array platform in a microbial fuel cell system composed of a multi-potentiostat, an automated data collection system (Python program, Yocto Maxi-coupler electromechanical relay module), and a powerful classification program. The classification scheme consists of a Probabilistic Neural Network (PNN), Support Vector Machines (SVM), and a General Regression Neural Network (GRNN) oracle-based system. Differential Pulse Voltammetry (DPV) is performed on standard or unknown samples. Then, using preset feature extraction and quality control, accepted data are analyzed by the intelligent classification system. In a typical use, thirty-two extracted features were analyzed to correctly classify the following pathogens: Escherichia coli ATCC#25922, Escherichia coli ATCC#11775, and Staphylococcus epidermidis ATCC#12228. An accuracy of 85.4% was recorded for unknown samples, within a shorter time than the industry standard of 24 hours. PMID:26818563
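    The final stage combines the labels proposed by the individual learners (PNN, SVM, GRNN) through an oracle. A majority vote is one simple way such a combination can work; this is an illustrative stand-in, since the paper's oracle logic is more elaborate:

```python
from collections import Counter

def oracle_vote(predictions):
    """Combine the class labels proposed by several classifiers
    (e.g. PNN, SVM, GRNN) by simple majority vote. Ties resolve to
    the label seen first among the most common."""
    label, _count = Counter(predictions).most_common(1)[0]
    return label
```

In practice an oracle can also weight each learner by its estimated reliability rather than counting votes equally; the sketch above shows only the unweighted case.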

  6. Evaluation of the Paratrend Multi-Analyte Sensor for Potential Utilization in Long-Duration Automated Cell Culture Monitoring

    NASA Technical Reports Server (NTRS)

    Hwang, Emma Y.; Pappas, Dimitri; Jeevarajan, Antony S.; Anderson, Melody M.

    2004-01-01

    BACKGROUND: Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments. While several single-analyte sensors exist to measure culture health, a multi-analyte sensor would simplify the cell culture system. One such multi-analyte sensor, the Paratrend 7 manufactured by Diametrics Medical, consists of three optical fibers for measuring pH, dissolved carbon dioxide (pCO(2)), and dissolved oxygen (pO(2)), and a thermocouple to measure temperature. The sensor bundle was designed for intra-vascular measurements in clinical settings, and can be used in bioreactors operated both on the ground and in NASA's Space Shuttle and International Space Station (ISS) experiments. METHODS: A Paratrend 7 sensor was placed at the outlet of a bioreactor inoculated with BHK-21 (baby hamster kidney) cells. The pH, pCO(2), pO(2), and temperature data were transferred continuously to an external computer. Cell culture medium, manually extracted from the bioreactor through a sampling port, was also assayed using a bench-top blood gas analyzer (BGA). RESULTS: Two Paratrend 7 sensors were used over a single cell culture experiment (64 days). When compared to the manually obtained BGA samples, the sensors had good agreement for pH, pCO(2), and pO(2), with bias (and precision) of 0.005 (0.024), 8.0 mmHg (4.4 mmHg), and 11 mmHg (17 mmHg), respectively, for the first two sensors. A third Paratrend sensor (operated for 141 days) had similar agreement (0.02+/-0.15 for pH, -4+/-8 mmHg for pCO(2), and 24+/-18 mmHg for pO(2)). CONCLUSION: The resulting biases and precisions are comparable to Paratrend sensor clinical results. Although the pO(2) differences may be acceptable for clinically relevant measurement ranges, the O(2) sensor in this bundle may not be reliable enough for the ranges of pO(2) in these cell culture studies without periodic calibration.
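    Bias and precision as reported above are the mean and spread of the paired sensor-minus-BGA differences. A minimal sketch of that computation (the sample standard deviation is an assumed convention; the abstract does not state which was used):

```python
import numpy as np

def bias_precision(sensor, reference):
    """Agreement between a continuous sensor and a reference assay:
    bias is the mean of (sensor - reference) over paired samples,
    precision the sample standard deviation of those differences."""
    diff = np.asarray(sensor, dtype=float) - np.asarray(reference, dtype=float)
    return diff.mean(), diff.std(ddof=1)
```

A small bias with small precision (as for the pH channel here) indicates the sensor tracks the reference closely; a large precision (as for pO(2)) signals drift or noise that would call for periodic recalibration.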

  7. Trypanosoma cruzi infectivity assessment in "in vitro" culture systems by automated cell counting.

    PubMed

    Liempi, Ana; Castillo, Christian; Cerda, Mauricio; Droguett, Daniel; Duaso, Juan; Barahona, Katherine; Hernández, Ariane; Díaz-Luján, Cintia; Fretes, Ricardo; Härtel, Steffen; Kemmerling, Ulrike

    2015-03-01

    Chagas disease is an endemic, neglected tropical disease in Latin America that is caused by the protozoan parasite Trypanosoma cruzi. In vitro models constitute the first experimental approach to study the physiopathology of the disease and to assay potential new trypanocidal agents. Here, we describe the use of commercial software (MATLAB(®)) to quantify T. cruzi amastigotes and infected mammalian cells (BeWo), and we compare this automated analysis with manual counting. There was no statistically significant difference between the manual and the automatic quantification of the parasite; the two methods showed a correlation analysis r(2) value of 0.9159. The most significant advantage of the automatic quantification was the efficiency of the analysis. The drawback of this automated cell counting method was that some parasites were assigned to the wrong BeWo cell; however, this error did not exceed 5% when adequate experimental conditions were chosen. We conclude that this quantification method constitutes an excellent tool for evaluating the parasite load in cells and therefore an easy and reliable way to study parasite infectivity. PMID:25553972
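    The manual-versus-automated comparison rests on a coefficient of determination (the record reports r(2) = 0.9159). A self-contained sketch of that statistic; the counts below are invented for illustration.

```python
# r^2 as the squared Pearson correlation between two count series.

def r_squared(x, y):
    """Squared Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

manual    = [12, 30, 45, 22, 8, 51]   # hypothetical parasite counts per field
automated = [14, 28, 47, 20, 9, 49]
print(f"r^2 = {r_squared(manual, automated):.4f}")
```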

  9. Characterization and classification of adherent cells in monolayer culture using automated tracking and evolutionary algorithms.

    PubMed

    Zhang, Zhen; Bedder, Matthew; Smith, Stephen L; Walker, Dawn; Shabir, Saqib; Southgate, Jennifer

    2016-08-01

    This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A cell tracking system employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24 h period. Subsequent analysis following feature extraction demonstrated the ability of the technique to successfully separate the modulated classes of cells using evolutionary algorithms. Specifically, a Cartesian Genetic Program (CGP) network was evolved that identified average migration speed, in-contact angular velocity, cohesivity and average cell clump size as the principal features contributing to the separation. Our approach not only provides unbiased and parsimonious insight into modulated class behaviours, but also yields mathematical formulae for the parameterization of computational models. PMID:27267455
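    Average migration speed, one of the features selected by the evolved CGP network, can be computed from tracked centroid positions. A minimal sketch; the track, units and sampling interval are invented, and the record does not describe the actual feature-extraction implementation.

```python
# Mean centroid displacement per unit time along a tracked cell path.
import math

def average_speed(track, dt):
    """track: list of (x, y) centroids at consecutive frames; dt: frame interval."""
    steps = [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]
    return sum(steps) / (dt * len(steps))

track = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]   # hypothetical centroids
print(average_speed(track, dt=10.0))            # distance units per minute
```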

  10. Comparative evaluation of the role of single and multiple blood specimens in the outcome of blood cultures using BacT/ALERT 3D (automated) blood culture system in a tertiary care hospital

    PubMed Central

    Elantamilan, D.; Lyngdoh, Valarie Wihiwot; Khyriem, Annie B.; Rajbongshi, Jyotismita; Bora, Ishani; Devi, Surbala Thingujam; Bhattacharyya, Prithwis; Barman, Himesh

    2016-01-01

    Introduction: Bloodstream infection (BSI) is a leading cause of mortality in critically ill patients. The mortality directly attributable to BSI has been estimated at around 16% and 40% in the general hospital population and the Intensive Care Unit (ICU) population, respectively. The detection rate of these infections increases with the number of blood samples obtained for culture. The newer continuous-monitoring automated blood culture systems with enhanced culture media show increased yield and sensitivity. Hence, we aimed to study the role of single and multiple blood specimens, drawn from different sites at the same time, in the outcome of an automated blood culture system. Materials and Methods and Results: A total of 1054 blood culture sets were analyzed over 1 year; the sensitivity of one, two, and three samples in a set was found to be 85.67%, 96.59%, and 100%, respectively, a statistically significant difference (P < 0.0001). Similar findings have been reported in a few other studies; however, in contrast to those studies, the isolation rates of Gram-positive bacteria with one (or the first) sample in a blood culture set were lower than those of Gram-negative bacilli. In our study, despite using the BacT/ALERT 3D continuous-monitoring culture system with FAN Plus culture bottles, 15% of positive cultures would have been missed if only a single sample had been collected in a blood culture set. Conclusion: Variables such as the volume of blood and the number of samples collected from different sites still play a major role in the outcome of these automated blood culture systems.
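    The set-level sensitivities (85.67%, 96.59%, and 100% for one, two, and three samples) follow from counting how many truly positive culture sets are detected by at least one of the first k samples. A sketch of that calculation with invented per-sample outcomes, not the study's data.

```python
# Sensitivity of a blood-culture set as a function of how many of its
# samples are examined: fraction of truly positive sets in which at
# least one of the first k samples showed growth.

def cumulative_sensitivity(sets, k):
    """sets: per-set lists of sample results (True = growth detected)."""
    positives = [s for s in sets if any(s)]
    detected = sum(1 for s in positives if any(s[:k]))
    return detected / len(positives)

culture_sets = [          # three samples per set, hypothetical outcomes
    [True, True, True],
    [False, True, True],
    [True, False, False],
    [False, False, True],
]
for k in (1, 2, 3):
    print(f"sensitivity with {k} sample(s): {cumulative_sensitivity(culture_sets, k):.0%}")
```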

  11. Automated Voxel Model from Point Clouds for Structural Analysis of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Bitelli, G.; Castellazzi, G.; D'Altri, A. M.; De Miranda, S.; Lambertini, A.; Selvaggi, I.

    2016-06-01

    In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is a point cloud. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, into a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. In order to achieve this result, a voxel model with variable resolution is produced. Different parameters are compared and different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
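    At its simplest, converting a point cloud to a voxel model means snapping each point onto a regular grid. A minimal occupancy sketch in NumPy, using fixed rather than variable resolution and synthetic points; the paper's interior-filling and FEM-generation steps are not shown.

```python
# Occupancy voxelization: map each point to the (i, j, k) cell it falls in.
import numpy as np

def voxelize(points, voxel_size):
    """Return the set of occupied (i, j, k) grid indices."""
    origin = points.min(axis=0)   # anchor the grid at the cloud's minimum corner
    idx = np.floor((points - origin) / voxel_size).astype(int)
    return {tuple(map(int, v)) for v in idx}

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.05, 0.2], [1.0, 1.0, 1.0]])
print(voxelize(pts, voxel_size=0.25))
```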

  12. Reductions in self-reported stress and anticipatory heart rate with the use of a semi-automated parallel parking system.

    PubMed

    Reimer, Bryan; Mehler, Bruce; Coughlin, Joseph F

    2016-01-01

    Drivers' reactions to a semi-autonomous assisted parallel parking system were evaluated in a field experiment. A sample of 42 drivers, balanced by gender and across three age groups (20-29, 40-49, 60-69), were given a comprehensive briefing, saw the technology demonstrated, practiced parallel parking 3 times each with and without the assistive technology, and then were assessed on an additional 3 parking events each with and without the technology. Anticipatory stress, as measured by heart rate, was significantly lower when drivers approached a parking space knowing that they would be using the assistive technology as opposed to manually parking. Self-reported stress levels following assisted parks were also lower. Thus, both subjective and objective data support the position that the assistive technology reduced stress levels in drivers who were given detailed training. It was observed that drivers decreased their use of turn signals when using the semi-autonomous technology, raising a caution concerning unintended lapses in safe driving behaviors that may occur when assistive technologies are used.

  13. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    PubMed

    Katta, Mohan A V S K; Khan, Aamir W; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K

    2015-01-01

    The rapid adoption of next generation sequencing (NGS) approaches has generated huge volumes of data. High throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in the C language utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs in parallel using Bpipe (https://github.com/ssadedin/bpipe) and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, in quick mode the pipeline outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.
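    A toy version of the read/base-level statistics Raspberry computes: read count, total bases, and mean per-base quality parsed from FASTQ records. This assumes Phred+33 quality encoding and invented records; the real tool is written in C against HTSlib.

```python
# Minimal FASTQ statistics: reads, bases, and mean Phred quality.

def fastq_stats(lines):
    """lines: flat list of FASTQ lines (header, sequence, '+', quality)."""
    reads = bases = qual_sum = 0
    for i in range(0, len(lines), 4):
        seq, qual = lines[i + 1], lines[i + 3]
        reads += 1
        bases += len(seq)
        qual_sum += sum(ord(c) - 33 for c in qual)   # Phred+33 decode
    return reads, bases, qual_sum / bases

fastq = [
    "@read1", "ACGT",  "+", "IIII",    # 'I' encodes Phred quality 40
    "@read2", "GGCCA", "+", "IIIII",
]
print(fastq_stats(fastq))
```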

  15. Toward fully automated high performance computing drug discovery: a massively parallel virtual screening pipeline for docking and molecular mechanics/generalized Born surface area rescoring to improve enrichment.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2014-01-27

    In this work we announce and evaluate a high-throughput virtual screening pipeline for in silico screening of virtual compound databases using high performance computing (HPC). Notable features of this pipeline are an automated receptor preparation scheme with unsupervised binding site identification. The pipeline includes receptor/target preparation, ligand preparation, VinaLC docking calculation, and molecular mechanics/generalized Born surface area (MM/GBSA) rescoring using the GB model by Onufriev and co-workers [J. Chem. Theory Comput. 2007, 3, 156-169]. Furthermore, we leverage HPC resources to perform an unprecedented, comprehensive evaluation of MM/GBSA rescoring when applied to the DUD-E data set (Directory of Useful Decoys: Enhanced), in which we selected 38 protein targets and a total of ∼0.7 million actives and decoys. The computer wall time for virtual screening has been reduced drastically on HPC machines, which increases the feasibility of extremely large ligand database screening with more accurate methods. HPC resources allowed us to rescore 20 poses per compound and evaluate the optimal number of poses to rescore. We find that keeping 5-10 poses is a good compromise between accuracy and computational expense. Overall, the results demonstrate that MM/GBSA rescoring has higher average receiver operating characteristic (ROC) area under curve (AUC) values and consistently better early recovery of actives than Vina docking alone. The enrichment performance is, however, target-dependent. MM/GBSA rescoring significantly outperforms Vina docking for the folate enzymes, kinases, and several other enzymes. The more accurate energy function and solvation terms of the MM/GBSA method allow MM/GBSA to achieve better enrichment, but the rescoring is still limited by the docking method to generate the poses with the correct binding modes. PMID:24358939
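    The enrichment metric here, ROC AUC over actives versus decoys, can be computed directly from the rank-sum identity: the probability that a randomly chosen active outscores a randomly chosen decoy. A sketch with invented docking scores (higher assumed better), not values from the study.

```python
# ROC AUC via the Mann-Whitney rank-sum identity; ties count half.

def roc_auc(active_scores, decoy_scores):
    """Probability that a random active outscores a random decoy."""
    wins = sum((a > d) + 0.5 * (a == d)
               for a in active_scores for d in decoy_scores)
    return wins / (len(active_scores) * len(decoy_scores))

actives = [9.1, 8.4, 7.9, 6.5]        # hypothetical rescored actives
decoys  = [7.0, 6.0, 5.5, 5.0, 4.2]   # hypothetical rescored decoys
print(f"AUC = {roc_auc(actives, decoys):.2f}")
```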

  17. Systematic review automation technologies.

    PubMed

    Tsafnat, Guy; Glasziou, Paul; Choong, Miew Keen; Dunn, Adam; Galgani, Filippo; Coiera, Enrico

    2014-07-09

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation of the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.

  19. Capillary electrophoresis for automated on-line monitoring of suspension cultures: Correlating cell density, nutrients and metabolites in near real-time.

    PubMed

    Alhusban, Ala A; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M

    2016-05-12

    Increasingly stringent demands on the production of biopharmaceuticals require monitoring of process parameters that affect product quality. We developed an automated platform for on-line, near real-time monitoring of suspension cultures by integrating microfluidic components for cell counting and filtration with a high-resolution separation technique. This enabled the correlation of the growth of a human lymphocyte cell line with changes in the essential metabolic markers glucose, glutamine, leucine/isoleucine and lactate, determined by Sequential Injection-Capillary Electrophoresis (SI-CE). Using 8.1 mL of media (41 μL per run), the metabolic status and cell density were recorded every 30 min over 4 days. The presented platform is flexible, simple and automated and allows for fast, robust and sensitive analysis with low sample consumption and high sample throughput. It is compatible with up- and out-scaling, and as such provides a promising new solution to meet future demands in process monitoring in the biopharmaceutical industry. PMID:27114228
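    The reported sample consumption can be sanity-checked with simple arithmetic: one 41 μL run every 30 min over the 4-day experiment gives roughly the 8.1 mL of media stated.

```python
# Consistency check on sample consumption: runs at one per 30 min.
runs = 4 * 24 * 2              # 4 days x 48 runs/day
volume_ml = runs * 41 / 1000   # 41 uL per run, converted to mL
print(runs, volume_ml)         # ~7.9 mL, in line with the ~8.1 mL reported
```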

  20. A semi-automated system for the assessment of toxicity to cultured mammalian cells based on detection of changes in staining properties.

    PubMed

    Barer, M R; Mann, G F; Drasar, B S

    1986-01-01

    We have established a semi-automated microtiter-based system for the quantification of dye binding to cultured eukaryotic cells. This system has been applied to the quantification of toxic activities that disrupt cell monolayers and their neutralization. We have used this background as a basis for developing a detection and characterization system for activities that do not cause such gross toxicity. A prototype system has been established based on three staining procedures which in broad terms assess cellular dehydrogenase activity, and protein, DNA, and RNA content. The activity of several agents affecting cyclic nucleotide metabolism, including cholera toxin, on the staining properties of exposed monolayers has been assessed. Several new categories of cellular response are readily discernible in this latter system, indicating that biological activities may be identified on the basis of the pattern of such responses. Since microtiter-based systems show considerable potential for automation, it is suggested that the further development of this approach could offer a realistic prospect for numerous forms of toxicity testing on an industrial scale.

  1. Parallel computers

    SciTech Connect

    Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  2. Spheroid formation of human thyroid cancer cells in an automated culturing system during the Shenzhou-8 Space mission.

    PubMed

    Pietsch, Jessica; Ma, Xiao; Wehland, Markus; Aleshcheva, Ganna; Schwarzwälder, Achim; Segerer, Jürgen; Birlem, Maria; Horn, Astrid; Bauer, Johann; Infanger, Manfred; Grimm, Daniela

    2013-10-01

    Human follicular thyroid cancer cells were cultured in Space to investigate the impact of microgravity on 3D growth. For this purpose, we designed and constructed a cell container that can endure enhanced physical forces, is connected to fluid storage chambers, performs media changes and cell harvesting automatically and supports cell viability. The container consists of a cell suspension chamber, two reserve tanks for medium and fixative and a pump for fluid exchange. The selected materials proved durable, non-cytotoxic, and did not inactivate RNAlater. This container was operated automatically during the unmanned Shenzhou-8 Space mission. FTC-133 human follicular thyroid cancer cells were cultured in Space for 10 days. Culture medium was exchanged after 5 days in Space and the cells were fixed after 10 days. The experiment revealed a scaffold-free formation of extraordinary large three-dimensional aggregates by thyroid cancer cells with altered expression of EGF and CTGF genes under real microgravity.

  3. Evaluation of the 3D BacT/ALERT automated culture system for the detection of microbial contamination of platelet concentrates.

    PubMed

    McDonald, C P; Rogers, A; Cox, M; Smith, R; Roy, A; Robbins, S; Hartley, S; Barbara, J A J; Rothenberg, S; Stutzman, L; Widders, G

    2002-10-01

    Bacterial transmission remains the major component of morbidity and mortality associated with transfusion-transmitted infections. Platelet concentrates are the most common cause of bacterial transmission. The BacT/ALERT 3D automated blood culture system has the potential to screen platelet concentrates for the presence of bacteria. Evaluation of this system was performed by spiking day 2 apheresis platelet units with individual bacterial isolates at final concentrations of 10 and 100 colony-forming units (cfu) mL-1. Fifteen organisms were used which had been cited in platelet transmission and monitoring studies. BacT/ALERT times to detection were compared with thioglycollate broth cultures, and the performance of five types of BacT/ALERT culture bottles was evaluated. Sampling was performed immediately after the inoculation of the units, and 10 replicates were performed per organism concentration for each of the five types of BacT/ALERT bottles. The mean times for the detection of these 15 organisms by BacT/ALERT, with the exception of Propionibacterium acnes, ranged from 9.1 to 48.1 h (all 10 replicates were positive). In comparison, the time range found using thioglycollate was 12.0-32.3 h (all 10 replicates were positive). P. acnes' BacT/ALERT mean detection times ranged from 89.0 to 177.6 h compared with 75.6-86.4 h for the thioglycollate broth. BacT/ALERT, with the exception of P. acnes, which has dubious clinical significance, gave equivalent or shorter detection times when compared with the thioglycollate broth system. The BacT/ALERT system detected a range of organisms at levels of 10 and 100 cfu mL-1. This study validates the BacT/ALERT microbial detection system for screening platelets. Currently, the system is the only practically viable option available for routinely screening platelet concentrates to prevent bacterial transmission.

  5. Cockpit automation

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1988-01-01

    The aims and methods of aircraft cockpit automation are reviewed from a human-factors perspective. Consideration is given to the mixed pilot reception of increased automation, government concern with the safety and reliability of highly automated aircraft, the formal definition of automation, and the ground-proximity warning system and accidents involving controlled flight into terrain. The factors motivating automation include technology availability; safety; economy, reliability, and maintenance; workload reduction and two-pilot certification; more accurate maneuvering and navigation; display flexibility; economy of cockpit space; and military requirements.

  6. Automated Testing System

    2006-05-09

    ATS is a Python-language program for automating test suites for software programs that do not interact with their users, such as scripted scientific simulations. ATS features a decentralized approach especially suited to larger projects. In its multinode mode it can utilize many nodes of a cluster in order to run many tests in parallel. It has features for submitting longer-running tests to a batch system and would have to be customized for use elsewhere.
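    The core idea, running many non-interactive tests concurrently and reporting pass/fail, can be sketched with the Python standard library alone. This is an illustration of the pattern, not ATS's actual implementation; a POSIX shell is assumed for the placeholder commands.

```python
# Minimal concurrent test runner: each "test" is a shell command, run in
# parallel; a test passes when its exit status is 0.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_test(cmd):
    """Run one test command and report (command, passed)."""
    result = subprocess.run(cmd, shell=True, capture_output=True)
    return cmd, result.returncode == 0

tests = ["true", "false", "echo ok"]   # placeholder test commands
with ThreadPoolExecutor(max_workers=4) as pool:
    for cmd, passed in pool.map(run_test, tests):
        print(f"{'PASS' if passed else 'FAIL'}: {cmd}")
```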

  7. Automated Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Pahle, Joseph; Brown, Nelson

    2015-01-01

    This presentation is an overview of the Automated Cooperative Trajectories project. An introduction to the phenomena of wake vortices is given, along with a summary of past research into the possibility of extracting energy from the wake by flying close parallel trajectories. Challenges and barriers to adoption of civilian automatic wake surfing technology are identified. A hardware-in-the-loop simulation is described that will support future research. Finally, a roadmap for future research and technology transition is proposed.

  8. Comparison of automated BAX PCR and standard culture methods for detection of Listeria monocytogenes in blue crabmeat (Callinectes sapidus) and blue crab processing plants.

    PubMed

    Pagadala, Sivaranjani; Parveen, Salina; Schwarz, Jurgen G; Rippen, Thomas; Luchansky, John B

    2011-11-01

    This study compared the automated BAX PCR with the standard culture method (SCM) for detecting Listeria monocytogenes in blue crab processing plants. Raw crabs, crabmeat, and environmental sponge samples were collected monthly from seven processing plants during the plant operating season, May through November 2006. For detection of L. monocytogenes in raw crabs and crabmeat, enrichment was performed in Listeria enrichment broth, whereas for environmental samples demi-Fraser broth was used; enriched samples were then plated on both Oxford agar and L. monocytogenes plating medium. Enriched samples were also analyzed by BAX PCR. A total of 960 samples were examined; 59 were positive by BAX PCR and 43 by SCM. Overall, there was no significant difference (P ≤ 0.05) between the methods for detecting the presence of L. monocytogenes in samples collected from crab processing plants. Twenty-two and 18 raw crab samples were positive for L. monocytogenes by SCM and BAX PCR, respectively. Twenty and 32 environmental samples were positive for L. monocytogenes by SCM and BAX PCR, respectively, whereas only one and nine finished products were positive. The sensitivities of BAX PCR for detecting L. monocytogenes in raw crabs, crabmeat, and environmental samples were 59.1, 100, and 60%, respectively. The results of this study indicate that BAX PCR is as sensitive as SCM for detecting L. monocytogenes in crabmeat, but more sensitive than SCM for detecting this bacterium in raw crabs and environmental samples.
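    Method-comparison sensitivities like those reported above are simple ratios: the fraction of reference-method positives that the candidate method also detects. A minimal sketch (the concordant count of 13 for raw crabs is inferred from the reported 59.1% figure, not stated in the abstract):

    ```python
    def sensitivity(concordant_positives, reference_positives):
        """Percent of reference-method positives also detected by the
        candidate method (here, BAX PCR relative to SCM)."""
        return 100.0 * concordant_positives / reference_positives

    # Raw crabs: 22 SCM-positive, of which an inferred 13 were also
    # BAX-positive; environmental: 20 SCM-positive, 12 concordant.
    ```

    With these counts, sensitivity(13, 22) reproduces the 59.1% raw-crab figure and sensitivity(12, 20) the 60% environmental figure.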

  9. The simultaneous release by bone explants in culture and the parallel activation of procollagenase and of a latent neutral proteinase that degrades cartilage proteoglycans and denatured collagen.

    PubMed Central

    Vaes, G; Eeckhout, Y; Lenaers-Claeys, G; François-Gillet, C; Druetz, J E

    1978-01-01

    1. A latent neutral proteinase was found in culture media of mouse bone explants. Its accumulation during the cultures is closely parallel to that of procollagenase; both require the presence of heparin in the media. 2. Latent neutral proteinase was activated by several treatments of the media known to activate procollagenase, such as limited proteolysis by trypsin, chymotrypsin, plasmin or kallikrein, dialysis against 3 M-NaSCN at 4 degrees C and prolonged preincubation at 25 degrees C. Its activation often followed that of the procollagenase present in the same media. 3. Activation of neutral proteinase (as does that of procollagenase) by trypsin or plasmin involved two successive steps: the activation of a latent endogenous activator present in the media followed by the activation of neutral proteinase itself by that activator. 4. The proteinase degrades cartilage proteoglycans, denatured collagen (Azocoll) and casein at neutral pH; it is inhibited by EDTA, cysteine or serum. Collagenase is not inhibited by casein or Azocoll and is less resistant to heat or to trypsin than is the proteinase. Partial separation of the two enzymes was achieved by gel filtration of the media but not by fractional (NH4)2SO4 precipitation, by ion exchange or by affinity chromatography on Sepharose-collagen. These fractionations did not activate latent enzymes. 5. Trypsin activation decreases the molecular weight of both latent enzymes (60 000-70 000) by 20 000-30 000, as determined by gel filtration of media after removal of heparin. 6. The latency of both enzymes could be due either to a zymogen or to an enzyme-inhibitor complex. A thermostable inhibitor of both enzymes was found in some media. However, combinations of either enzyme with that inhibitor were not reactivated by trypsin, indicating that this inhibitor is unlikely to be the cause of the latency. PMID:208518

  10. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
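    One of the decomposition strategies the article surveys, image-space partitioning, can be illustrated with an interleaved scanline assignment; interleaving is a common load-balancing heuristic because scene complexity tends to cluster in one region of the image. The sketch below is generic, not taken from the article:

    ```python
    def assign_scanlines(height, workers):
        """Interleaved (cyclic) image-space decomposition: scanline i is
        rendered by worker i % workers, so expensive regions of the image
        are spread across all workers instead of landing on one."""
        return {w: list(range(w, height, workers)) for w in range(workers)}
    ```

    With 8 scanlines and 3 workers, worker 0 renders rows 0, 3 and 6, and every row is assigned to exactly one worker; the trade-off against block decomposition is reduced coherence between adjacent scanlines.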

  11. Evaluation of 3 automated real-time PCR (Xpert C. difficile assay, BD MAX Cdiff, and IMDx C. difficile for Abbott m2000 assay) for detecting Clostridium difficile toxin gene compared to toxigenic culture in stool specimens.

    PubMed

    Yoo, Jaeeun; Lee, Hyeyoung; Park, Kang Gyun; Lee, Gun Dong; Park, Yong Gyu; Park, Yeon-Joon

    2015-09-01

    We evaluated the performance of 3 automated systems (Cepheid Xpert, BD MAX, and IMDx C. difficile for Abbott m2000) for detecting the Clostridium difficile toxin gene, compared to toxigenic culture. Of the 254 stool specimens tested, 87 (48 slight, 35 moderate, and 4 heavy growth) were toxigenic culture positive. The overall sensitivities and specificities were 82.8% and 98.8% for Xpert, 81.6% and 95.8% for BD MAX, and 62.1% and 99.4% for IMDx, respectively. The specificity was significantly higher for IMDx than for BD MAX (P = 0.03). All stool samples underwent toxin A/B enzyme immunoassay testing; of the 254 samples, only 29 were positive, and 2 of them were toxigenic culture negative. Considering the rapidity and high specificity of the real-time PCR assays compared to toxigenic culture, they can be used as the first test method for C. difficile infection/colonization.

  12. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  13. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  14. Process automation

    SciTech Connect

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs.

  15. Parallel stimulation of ACTH, beta-LPH + beta-endorphin and alpha-MSH release by alpha-adrenergic agents in rat anterior pituitary cells in culture.

    PubMed

    Raymond, V; Lépine, J; Giguère, V; Lissitzky, J C; Côté, J; Labrie, F

    1981-06-01

    Characteristics of the alpha-adrenergic stimulation of ACTH, beta-endorphin + beta-LPH and alpha-MSH release were studied in rat anterior pituitary cells in primary culture. Parallel changes of ACTH, beta-endorphin + beta-LPH and alpha-MSH release were found under all stimulatory and inhibitory conditions with natural and synthetic catecholamine agonists and antagonists. (-)Epinephrine and (-)norepinephrine lead to an 8-10-fold stimulation of peptide release at ED50 values of 20 and 90 nM, respectively. The stereoselectivity of the alpha-adrenergic stimulatory action on peptide release is indicated by a 100-fold higher activity of (-)- than (+)norepinephrine, while (-)epinephrine is 10 times more potent than the corresponding (+) stereoisomer. The involvement of a typical alpha-adrenergic mechanism in the control of release of ACTH, beta-endorphin and related peptides in the rat anterior pituitary gland is indicated by the following order of potency of a series of catecholaminergic agents (ED50 values): (-)epinephrine (20 nM) > (-)norepinephrine (90 nM) > phenylephrine (400 nM) > isoproterenol (6000 nM). The stimulatory effect of (-)epinephrine or phenylephrine is completely reversed by low concentrations of the alpha-adrenergic antagonist phentolamine, while the beta-adrenergic antagonist propranolol has no effect up to 10 μM. Besides providing an easily accessible pure population of post-synaptic alpha-adrenergic receptors with potential applications as a model for other, less accessible alpha-adrenergic brain systems, the present data suggest the possibility of the direct involvement of a catecholamine in the physiological control of ACTH secretion in the rat anterior pituitary gland.

  16. Investigating the feasibility of scale up and automation of human induced pluripotent stem cells cultured in aggregates in feeder free conditions.

    PubMed

    Soares, Filipa A C; Chandra, Amit; Thomas, Robert J; Pedersen, Roger A; Vallier, Ludovic; Williams, David J

    2014-03-10

    The transfer of a laboratory process into a manufacturing facility is one of the most critical steps required for the large-scale production of cell-based therapy products. This study describes the first published protocol for scalable automated expansion of human induced pluripotent stem cell lines growing as aggregates in feeder-free and chemically defined medium. Cells were successfully transferred between different sites representative of research and manufacturing settings, and passaged both manually and using the CompacT SelecT automation platform. Modified protocols were developed for the automated system, and the management of cell aggregates (clumps) was identified as the critical step. Cellular morphology, pluripotency gene expression and differentiation into the three germ layers were used to compare the outcomes of the manual and automated processes.

  17. Automated Microbial Metabolism Laboratory

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Development of the automated microbial metabolism laboratory (AMML) concept is reported. The focus of effort of AMML was on the advanced labeled release experiment. Labeled substrates, inhibitors, and temperatures were investigated to establish a comparative biochemical profile. Profiles at three time intervals on soil and pure cultures of bacteria isolated from soil were prepared to establish a complete library. The development of a strategy for the return of a soil sample from Mars is also reported.

  18. Culture.

    ERIC Educational Resources Information Center

    1997

    Twelve conference papers on cultural aspects of second language instruction include: "Towards True Multiculturalism: Ideas for Teachers" (Brian McVeigh); "Comparing Cultures Through Critical Thinking: Development and Interpretations of Meaningful Observations" (Laurel D. Kamada); "Authority and Individualism in Japan and the USA" (Alisa Woodring);…

  19. Automated High Throughput Drug Target Crystallography

    SciTech Connect

    Rupp, B

    2005-02-18

    The molecular structures of drug target proteins and receptors form the basis for 'rational' or structure-guided drug design. The majority of target structures are experimentally determined by protein X-ray crystallography, which has evolved into a highly automated, high-throughput drug discovery and screening tool. Process automation has accelerated tasks from parallel protein expression, through fully automated crystallization and rapid data collection, to highly efficient structure determination methods. A thoroughly designed automation technology platform, supported by a powerful informatics infrastructure, forms the basis for optimal workflow implementation and for the data mining and analysis tools needed to generate new leads from experimental protein drug target structures.

  20. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system, and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
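    The authors' estimate is easy to evaluate numerically. A small sketch of the formula, with illustrative radii that are examples rather than values from the paper:

    ```python
    def pipes_needed(R, r, lubrication="laminar"):
        """Number of small pipes of radius r delivering the same oil flux
        as one large pipe of radius R, via N = (R/r)**alpha, with alpha = 4
        for laminar lubricating water flow and 19/7 for turbulent."""
        alpha = 4 if lubrication == "laminar" else 19 / 7
        return (R / r) ** alpha

    # Illustrative radii (not from the paper): R = 50 cm, r = 10 cm.
    ```

    With R/r = 5, the laminar exponent gives 5**4 = 625 small pipes, while the turbulent exponent 19/7 gives roughly 79 — a reminder that the flow regime of the lubricating water dominates the economics of the parallel layout.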

  1. Habitat automation

    NASA Technical Reports Server (NTRS)

    Swab, Rodney E.

    1992-01-01

    A habitat, on either the surface of the Moon or Mars, will be designed and built with the proven technologies of that day. These technologies will be mature and readily available to the habitat designer. We believe an acceleration of the normal pace of automation would allow a habitat to be safer and more easily maintained than would be the case otherwise. This document examines the operation of a habitat and describes elements of that operation which may benefit from an increased use of automation. Research topics within the automation realm are then defined and discussed with respect to the role they can have in the design of the habitat. Problems associated with the integration of advanced technologies into real-world projects at NASA are also addressed.

  2. An Automated HIV-1 Env-Pseudotyped Virus Production for Global HIV Vaccine Trials

    PubMed Central

    Fuss, Martina; Mazzotta, Angela S.; Sarzotti-Kelsoe, Marcella; Ozaki, Daniel A.; Montefiori, David C.; von Briesen, Hagen; Zimmermann, Heiko; Meyerhans, Andreas

    2012-01-01

    Background: Infections with HIV still represent a major human health problem worldwide, and a vaccine is the only long-term option for fighting this virus efficiently. Standardized assessments of HIV-specific immune responses in vaccine trials are essential for prioritizing vaccine candidates in preclinical and clinical stages of development. With respect to neutralizing antibodies, assays with HIV-1 Env-pseudotyped viruses are a high priority. To meet the increasing demand for HIV pseudoviruses, a complete cell culture and transfection automation system has been developed. Methodology/Principal Findings: The automation system for HIV pseudovirus production comprises a modified Tecan-based Cellerity system. It covers an area of 5×3 meters and includes a robot platform, a cell counting machine, a CO2 incubator for cell cultivation and a media refrigerator. The processes for cell handling, transfection and pseudovirus production have been implemented according to manual standard operating procedures and are controlled and scheduled autonomously by the system. The system is housed in a biosafety level II cabinet that guarantees protection of personnel, environment and the product. HIV pseudovirus stocks on a scale from 140 ml to 1000 ml have been produced on the automated system. Parallel manual production of HIV pseudoviruses and comparisons (bridging assays) confirmed that the automatically produced pseudoviruses were of quality equivalent to those produced manually. In addition, the automated method was fully validated according to Good Clinical Laboratory Practice (GCLP) guidelines, including the validation parameters accuracy, precision, robustness and specificity. Conclusions: An automated HIV pseudovirus production system has been successfully established. It allows the high-quality production of HIV pseudoviruses under GCLP conditions. In its present form, the installed module enables the production of 1000 ml of virus-containing cell culture supernatant per

  3. Automated dispenser

    SciTech Connect

    Hollen, R.M.; Stalnaker, N.D.

    1989-04-06

    An automated dispenser having a conventional pipette attached to an actuating cylinder through a flexible cable for delivering precise quantities of a liquid through commands from remotely located computer software. The travel of the flexible cable is controlled by adjustable stops and a locking shaft. The pipette can be positioned manually or by the hands of a robot. 1 fig.

  4. Automating Finance

    ERIC Educational Resources Information Center

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  5. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge in automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a distance metric that obeys the triangle inequality; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text-corpus “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.
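    The pruning that makes pivot-based methods fast follows directly from the triangle inequality: |d(q,p) − d(p,x)| is a lower bound on d(q,x), so when that bound already exceeds the search radius, the expensive document-to-document distance never has to be computed. A minimal sketch of the test (names are illustrative, not from the paper):

    ```python
    def prune_with_pivot(d_query_pivot, d_pivot_to_docs, radius):
        """Keep only documents that might lie within `radius` of the query.
        By the triangle inequality, |d(q,p) - d(p,x)| <= d(q,x); if that
        lower bound already exceeds the radius, d(q,x) need not be computed."""
        return [doc for doc, d_px in d_pivot_to_docs.items()
                if abs(d_query_pivot - d_px) <= radius]
    ```

    Each pivot thus trades one stored distance per document for the chance to skip a full high-dimensional comparison, which is where the savings in a multimillion-dimensional space come from.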

  6. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis, required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  7. Parallel reactor systems for bioprocess development.

    PubMed

    Weuster-Botz, Dirk

    2005-01-01

    Controlled parallel bioreactor systems allow fed-batch operation at early stages of process development. The characteristics of shaken bioreactors operated in parallel (shake flask, microtiter plate), sparged bioreactors (small-scale bubble column) and stirred bioreactors (stirred tank, stirred column) are briefly summarized. Parallel fed-batch operation is achieved with an intermittent feeding and pH-control system for up to 16 bioreactors operated in parallel at a scale of 100 ml. Examples of the scale-up and scale-down of pH-controlled microbial fed-batch processes demonstrate that controlled parallel reactor systems can make bioprocess development more effective. Future developments are also outlined, including units of 48 parallel stirred-tank reactors with individual pH and pO2 control, automation and a liquid-handling system, operated at the millilitre scale.
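    An intermittent pH-control cycle of the kind described can be pictured as a proportional correction applied across a bank of reactors. The sketch below is a generic illustration under assumed constants (setpoint, dosing gain and cap are invented for the example, not values from the paper):

    ```python
    def base_shots(ph_readings, setpoint=7.0, ml_per_ph=0.2, max_shot=0.5):
        """One intermittent pH-control cycle over a bank of parallel
        reactors: each reactor whose pH fell below the setpoint receives
        a base addition proportional to the error, capped so a noisy
        probe cannot trigger a large overshoot. All constants are
        illustrative assumptions."""
        return [min(max(setpoint - ph, 0.0) * ml_per_ph, max_shot)
                for ph in ph_readings]
    ```

    Running one such cycle per feeding interval for every reactor in the bank is what lets a single controller keep 16 (or 48) vessels at their setpoints without continuous titration hardware per vessel.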

  8. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  9. Automated lithocell

    NASA Astrophysics Data System (ADS)

    Englisch, Andreas; Deuter, Armin

    1990-06-01

    Integration and automation have gained more and more ground in modern IC manufacturing. It is difficult to calculate directly the profit these investments yield. On the other hand, the demands on man, machine and technology have increased enormously of late, and only by means of integration and automation can these demands be met. Some salient points: the complexity and costs incurred by the equipment and processes have risen significantly; owing to the reduction of all dimensions, the tolerances within which the various process steps have to be carried out have become smaller and smaller, and adherence to these tolerances more and more difficult; and the cycle time has become more and more important, both for the development and control of new processes and, to a great extent, for a rapid and reliable supply to the customer. In order that products be competitive under these conditions, all sorts of costs have to be reduced and the yield has to be maximized. Therefore, computer-aided control of the equipment and the process, combined with automatic data collection and real-time SPC (statistical process control), has become absolutely necessary for successful IC manufacturing. Human errors must be eliminated from the execution of the various process steps by automation. The working time set free in this way makes it possible for human creativity to be employed on a larger scale in stabilizing the processes. Besides, computer-aided equipment control can ensure optimal utilization of the equipment round the clock.

  10. Automating the multiprocessing environment

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.

    1989-01-01

    An approach to automating the programming and operation of tree-structured networks of multiprocessor systems is discussed. A conceptual, knowledge-based operating environment is presented, and requirements for two major technology elements are identified: (1) an intelligent information translator is proposed for implementing information transfer between dissimilar hardware and software, thereby enabling independent and modular development of future systems and promoting language-independence of codes and information; (2) a resident system activity manager, which recognizes the systems' capabilities and monitors the status of all systems within the environment, is proposed for integrating dissimilar systems into effective parallel processing resources that optimally meet user needs. Finally, key computational capabilities which must be provided before the environment can be realized are identified.

  11. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  12. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request for each plug-in in the feature is issued. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. 
It can be applied to any

  13. The PARTY parallel runtime system

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Mirchandaney, Ravi; Smith, R. M.; Crowley, Kay; Nicol, D. M.

    1989-01-01

    In the present automated system for the organization of the data and computational operations entailed by parallel problems, in ways that optimize multiprocessor performance, general heuristics for partitioning program data and control are implemented by capturing and manipulating representations of a computation at run time. These heuristics are directed toward the dynamic identification and allocation of concurrent work in computations with irregular computational patterns. An optimized static-workload partitioning is computed for such repetitive-computation pattern problems as the iterative ones employed in scientific computation.
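    The dynamic allocation heuristic described — identifying and handing out concurrent work at run time — is commonly realized as self-scheduling from a shared task queue. A minimal single-machine sketch of that pattern (an illustration, not the PARTY implementation):

    ```python
    import queue
    import threading

    def dynamic_schedule(tasks, n_workers, work_fn):
        """Self-scheduling for irregular workloads: idle workers repeatedly
        pull the next task from a shared queue, so the load balances itself
        at run time even when task costs vary widely."""
        q = queue.Queue()
        for t in tasks:
            q.put(t)
        results = []
        lock = threading.Lock()

        def worker():
            while True:
                try:
                    t = q.get_nowait()
                except queue.Empty:
                    return  # queue drained; this worker is done
                r = work_fn(t)
                with lock:
                    results.append(r)

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return results
    ```

    For repetitive-computation problems, by contrast, the cost of each task can be measured once and a static partition computed up front, which is the optimization the abstract describes for iterative scientific codes.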

  14. Altered expression of triadin 95 causes parallel changes in localized Ca2+ release events and global Ca2+ signals in skeletal muscle cells in culture

    PubMed Central

    Fodor, János; Gönczi, Monika; Sztretye, Monika; Dienes, Beatrix; Oláh, Tamás; Szabó, László; Csoma, Eszter; Szentesi, Péter; Szigeti, Gyula P; Marty, Isabelle; Csernoch, László

    2008-01-01

    The 95 kDa triadin (Trisk 95), an integral protein of the sarcoplasmic reticular membrane in skeletal muscle, interacts with both the ryanodine receptor (RyR) and calsequestrin. While its role in the regulation of calcium homeostasis has been extensively studied, data are not available on whether overexpression of, or interference with the expression of, Trisk 95 would affect calcium sparks, the localized calcium release events (LCRE). In the present study, LCRE and calcium transients were studied using laser scanning confocal microscopy on C2C12 cells and on primary cultures of skeletal muscle. Liposome- or adenovirus-mediated Trisk 95 overexpression and shRNA interference with triadin translation were used to modify the level of the protein. Stable overexpression in C2C12 cells significantly decreased the amplitude and frequency of calcium sparks, and the frequency of embers. In line with these observations, depolarization-evoked calcium transients were also suppressed. Similarly, adenoviral transfection of Trisk 95 into cultured mouse skeletal muscle cells significantly decreased both the frequency and amplitude of spontaneous global calcium transients. Inhibition of endogenous triadin expression by RNA interference caused opposite effects. Primary cultures of rat skeletal muscle cells expressing endogenous Trisk 95 readily generated spontaneous calcium transients but rarely produced calcium sparks. Their transfection with a specific shRNA sequence significantly reduced the triadin-specific immunoreactivity. Functional experiments on these cells revealed that while caffeine-evoked calcium transients were reduced, LCRE appeared with higher frequency. These results suggest that Trisk 95 negatively regulates RyR function by suppressing localized calcium release events and global calcium signals in cultured muscle cells. PMID:18845610

  15. Detection of Salmonella spp. with the BACTEC 9240 Automated Blood Culture System in 2008 - 2014 in Southern Iran (Shiraz): Biogrouping, MIC, and Antimicrobial Susceptibility Profiles of Isolates

    PubMed Central

    Anvarinejad, Mojtaba; Pouladfar, Gholam Reza; Pourabbas, Bahman; Amin Shahidi, Maneli; Rafaatpour, Noroddin; Dehyadegari, Mohammad Ali; Abbasi, Pejman; Mardaneh, Jalal

    2016-01-01

    Background Human salmonellosis continues to be a major international problem, in terms of both morbidity and economic losses. The antibiotic resistance of Salmonella is an increasing public health emergency, since infections from resistant bacteria are more difficult and costly to treat. Objectives The aims of the present study were to investigate the isolation of Salmonella spp. with the BACTEC automated system from blood samples during 2008 - 2014 in southern Iran (Shiraz). Detection of subspecies, biogrouping, and antimicrobial susceptibility testing by the disc diffusion and agar dilution methods were performed. Patients and Methods A total of 19 Salmonella spp. were consecutively isolated using BACTEC from blood samples of patients between 2008 and 2014 in Shiraz, Iran. The isolates were identified as Salmonella, based on biochemical tests embedded in the API-20E system. In order to characterize the biogroups and subspecies, biochemical testing was performed. Susceptibility testing (disc diffusion and agar dilution) and extended-spectrum β-lactamase (ESBL) detection were performed according to the Clinical and Laboratory Standards Institute (CLSI) guidelines. Results Of the total 19 Salmonella spp. isolates recovered by the BACTEC automated system, all belonged to Salmonella enterica subsp. houtenae. Five isolates (26.5%) were resistant to azithromycin. Six (31.5%) isolates with the disc diffusion method and five (26.3%) with the agar dilution method displayed resistance to nalidixic acid (minimum inhibitory concentration [MIC] > 32 μg/mL). All nalidixic acid-resistant isolates were also ciprofloxacin-sensitive. All isolates were ESBL-negative. Twenty-one percent of isolates were found to be resistant to chloramphenicol (MIC ≥ 32 μg/mL), and 16% were resistant to ampicillin (MIC ≥ 32 μg/mL). Conclusions The results indicate that multidrug-resistant (MDR) strains of Salmonella are increasing in number, and fewer antibiotics may be useful for

  16. First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    NASA Technical Reports Server (NTRS)

    Griffin, Sandy (Editor)

    1987-01-01

    Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.

  17. Comparison of automated BAX polymerase chain reaction and standard culture methods for detection of Listeria monocytogenes in blue crab meat (Callinectes sapidus) and blue crab processing plants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study compared the BAX Polymerase Chain Reaction method (BAX PCR) with the Standard Culture Method (SCM) for detection of L. monocytogenes in blue crab meat and crab processing plants. The aim of this study was to address this data gap. Raw crabs, finished products and environmental sponge samp...

  18. Rapid detection of Gram-negative bacteria and their drug resistance genes from positive blood cultures using an automated microarray assay.

    PubMed

    Han, Eunhee; Park, Dong-Jin; Kim, Yukyoung; Yu, Jin Kyung; Park, Kang Gyun; Park, Yeon-Joon

    2015-03-01

    We evaluated the performance of the Verigene Gram-negative blood culture (BC-GN) assay (CE-IVD version) for identification of Gram-negative (GN) bacteria and detection of resistance genes. A total of 163 GN organisms (72 characterized strains and 91 clinical isolates from 86 patients) were tested; among the clinical isolates, 86 (94.5%) isolates were included in the BC-GN panel. For identification, the agreement was 98.6% (146/148, 95% confidence interval [CI], 92.1-100) and 70% (7/10, 95% CI, 53.5-100) for monomicrobial and polymicrobial cultures, respectively. Of the 48 resistance genes harbored by 43 characterized strains, all were correctly detected. Of the 19 clinical isolates harboring resistance genes, 1 CTX-M-producing Escherichia coli isolated in polymicrobial culture was not detected. Overall, the BC-GN assay provides acceptable accuracy for rapid identification of Gram-negative bacteria and detection of resistance genes compared with routine laboratory methods, although it is limited in the number of genera/species and resistance genes included in the panel and shows lower sensitivity in polymicrobial cultures.
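
    The agreement figures quoted above follow directly from the reported counts. A minimal sketch (counts taken from the abstract; the one-decimal rounding convention is an assumption):

```python
# Percent agreement for the BC-GN assay, computed from the counts
# reported in the abstract (146/148 monomicrobial, 7/10 polymicrobial).

def agreement(correct, total):
    """Percent agreement, rounded to one decimal place."""
    return round(100.0 * correct / total, 1)

mono = agreement(146, 148)  # monomicrobial cultures -> 98.6
poly = agreement(7, 10)     # polymicrobial cultures -> 70.0
```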

  19. Contactless automated manipulation of mesoscale objects using opto-fluidic actuation and visual servoing.

    PubMed

    Vela, Emir; Hafez, Moustapha; Régnier, Stéphane

    2014-05-01

    This work describes an automated opto-fluidic system for parallel non-contact manipulation of microcomponents. The strong dynamics of laser-driven thermocapillary flows were used to drag microcomponents at high speeds. These high-speed flows made it possible to manipulate micro-objects in parallel using only a single laser and a mirror scanner. An automated process was implemented using visual servoing with a high-speed camera in order to achieve accurate parallel manipulation. Automated manipulation of two glass beads of 30 to 300 μm in diameter moving in parallel at speeds in the mm/s range was demonstrated. PMID:24880415
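
    The visual-servoing loop described above can be pictured as a proportional controller that repeatedly steers an object toward its target using camera position feedback. Everything below (gain, tolerance, the 2-D position model) is an illustrative assumption, not the authors' implementation:

```python
# Hedged sketch of a proportional visual-servoing loop: each iteration
# takes the object's measured position (simulated here) and commands a
# displacement toward the target. Gains and tolerances are hypothetical.

def servo_step(pos, target, gain=0.5):
    """One control iteration: move a fraction of the remaining error."""
    return tuple(p + gain * (t - p) for p, t in zip(pos, target))

def servo(pos, target, tol=1e-3, max_iter=100):
    """Iterate until the object is within tol of the target."""
    for _ in range(max_iter):
        if max(abs(t - p) for p, t in zip(pos, target)) < tol:
            break
        pos = servo_step(pos, target)
    return pos
```

In the actual system the commanded displacement would drive the mirror scanner that repositions the laser spot for each tracked object in turn.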

  1. Automated Extraction Improves Multiplex Molecular Detection of Infection in Septic Patients

    PubMed Central

    Regueiro, Benito J.; Varela-Ledo, Eduardo; Martinez-Lamas, Lucia; Rodriguez-Calviño, Javier; Aguilera, Antonio; Santos, Antonio; Gomez-Tato, Antonio; Alvarez-Escudero, Julian

    2010-01-01

    Sepsis is one of the leading causes of morbidity and mortality in hospitalized patients worldwide. Molecular technologies for rapid detection of microorganisms in patients with sepsis have only recently become available. LightCycler SeptiFast test Mgrade (Roche Diagnostics GmbH) is a multiplex PCR analysis able to detect DNA of the 25 most frequent pathogens in bloodstream infections. The time and labor saved while avoiding excessive laboratory manipulation is the rationale for selecting the automated MagNA Pure compact nucleic acid isolation kit-I (Roche Applied Science, GmbH) as an alternative to conventional SeptiFast extraction. For the purposes of this study, we evaluate extraction in order to demonstrate the feasibility of automation. Finally, a prospective observational study was done using 106 clinical samples obtained from 76 patients in our ICU. Both extraction methods were used in parallel to test the samples. When molecular detection test results using both manual and automated extraction were compared with the data from blood cultures obtained at the same time, the results show that SeptiFast with the alternative MagNA Pure compact extraction not only shortens the complete workflow to 3.57 hrs., but also increases sensitivity of the molecular assay for detecting infection as defined by positive blood culture confirmation. PMID:20967222

  2. Automation tools for flexible aircraft maintenance.

    SciTech Connect

    Prentice, William J.; Drotning, William D.; Watterberg, Peter A.; Loucks, Clifford S.; Kozlowski, David M.

    2003-11-01

    This report summarizes the accomplishments of the Laboratory Directed Research and Development (LDRD) project 26546 at Sandia, during the period FY01 through FY03. The project team visited four DoD depots that support extensive aircraft maintenance in order to understand critical needs for automation, and to identify maintenance processes for potential automation or integration opportunities. From the visits, the team identified technology needs and application issues, as well as non-technical drivers that influence the application of automation in depot maintenance of aircraft. Software tools for automation facility design analysis were developed, improved, extended, and integrated to encompass greater breadth for eventual application as a generalized design tool. The design tools for automated path planning and path generation have been enhanced to incorporate those complex robot systems with redundant joint configurations, which are likely candidate designs for a complex aircraft maintenance facility. A prototype force-controlled actively compliant end-effector was designed and developed based on a parallel kinematic mechanism design. This device was developed for demonstration of surface finishing, one of many in-contact operations performed during aircraft maintenance. This end-effector tool was positioned along the workpiece by a robot manipulator, programmed for operation by the automated planning tools integrated for this project. Together, the hardware and software tools demonstrate many of the technologies required for flexible automation in a maintenance facility.

  3. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  4. Compact, Automated, Frequency-Agile Microspectrofluorimeter

    NASA Technical Reports Server (NTRS)

    Fernandez, Salvador M.; Guignon, Ernest F.

    1995-01-01

    Compact, reliable, rugged, automated cell-culture and frequency-agile microspectrofluorimetric apparatus developed to perform experiments involving photometric imaging observations of single live cells. In original application, apparatus operates mostly unattended aboard spacecraft; potential terrestrial applications include automated or semiautomated diagnosis of pathological tissues in clinical laboratories, biomedical instrumentation, monitoring of biological process streams, and portable instrumentation for testing biological conditions in various environments. Offers obvious advantages over present laboratory instrumentation.

  5. Automated Coal-Mining System

    NASA Technical Reports Server (NTRS)

    Gangal, M. D.; Isenberg, L.; Lewis, E. V.

    1985-01-01

    Proposed system offers safety and large return on investment. System, operating by year 2000, employs machines and processes based on proven principles. According to concept, line of parallel machines, connected in groups of four to service modules, attacks face of coal seam. High-pressure water jets and central auger on each machine break face. Jaws scoop up coal chunks, and auger grinds them and forces fragments into slurry-transport system. Slurry pumped through pipeline to point of use. Concept for highly automated coal-mining system increases productivity, makes mining safer, and protects health of mine workers.

  6. Laboratory automation: trajectory, technology, and tactics.

    PubMed

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation, are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a
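
    Reflex testing, one of the process-control functions named above, can be sketched as a small rule table: a result that satisfies a rule's condition automatically triggers a follow-up order. The test names and thresholds below are illustrative assumptions, not from the article:

```python
# Illustrative reflex-testing rules: (test, trigger condition, follow-up).
# The specific tests and cutoffs are hypothetical examples.
REFLEX_RULES = [
    ("TSH", lambda v: v > 4.5, "Free T4"),
    ("glucose", lambda v: v > 200, "HbA1c"),
]

def reflex_orders(test, value):
    """Return the follow-up tests triggered by a single result."""
    return [follow for t, cond, follow in REFLEX_RULES
            if t == test and cond(value)]
```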

  7. Operations automation

    NASA Technical Reports Server (NTRS)

    Boreham, Charles Thomas

    1994-01-01

    This is truly the era of 'faster-better-cheaper' at the National Aeronautics and Space Administration/Jet Propulsion Laboratory (NASA/JPL). To continue JPL's primary mission of building and operating interplanetary spacecraft, all possible avenues are being explored in the search for better value for each dollar spent. A significant cost factor in any mission is the amount of manpower required to receive, decode, decommutate, and distribute spacecraft engineering and experiment data. The replacement of the many mission-unique data systems with the single Advanced Multimission Operations System (AMMOS) has already allowed for some manpower reduction. Now, we find that further economies are made possible by drastically reducing the number of human interventions required to perform the setup, data saving, station handover, processed data loading, and tear down activities that are associated with each spacecraft tracking pass. We have recently adapted three public domain tools to the AMMOS system which allow common elements to be scheduled and initialized without the normal human intervention. This is accomplished with a stored weekly event schedule. The manual entries and specialized scripts which had to be provided just prior to and during a pass are now triggered by the schedule to perform the functions unique to the upcoming pass. This combination of public domain software and the AMMOS system has been run in parallel with the flight operation in an online testing phase for six months. With this methodology, a savings of 11 man-years per year is projected with no increase in data loss or project risk. There are even greater savings to be gained as we learn other uses for this configuration.
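
    The stored weekly event schedule can be pictured as a table of timed entries, each of which triggers a pass activity without operator input. The entries and names below are hypothetical:

```python
# Hypothetical weekly event schedule: each entry fires a pass activity
# (setup, handover, teardown) at its scheduled hour, replacing the
# manual entries formerly made before and during each tracking pass.
SCHEDULE = [
    (10, "setup"),
    (11, "handover"),
    (14, "teardown"),
]

def due_actions(now_hour):
    """Return the activities scheduled for the given hour."""
    return [action for hour, action in SCHEDULE if hour == now_hour]
```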

  8. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques, presenting rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  9. Automated External Defibrillator

    MedlinePlus

    What Is an Automated External Defibrillator? An automated external defibrillator (AED) is a portable device that ...

  10. Automation: triumph or trap?

    PubMed

    Smythe, M H

    1997-01-01

    Automation, a hot topic in the laboratory world today, can be a very expensive option. Those who are considering implementing automation can save time and money by examining the issues from the standpoint of an industrial/manufacturing engineer. The engineer not only asks what problems will be solved by automation, but what problems will be created. This article discusses questions that must be asked and answered to ensure that automation efforts will yield real and substantial payoffs.

  11. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  12. MPP parallel forth

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the Parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. A description is then presented of how Parallel FORTH is implemented on the MPP.

  13. Shoe-String Automation

    SciTech Connect

    Duncan, M.L.

    2001-07-30

    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  14. Automation of industrial bioprocesses.

    PubMed

    Beyeler, W; DaPra, E; Schneider, K

    2000-01-01

    The dramatic development of new electronic devices within the last 25 years has had a substantial influence on the control and automation of industrial bioprocesses. Within this short period of time the method of controlling industrial bioprocesses has changed completely. In this paper, the authors use a practical approach focusing on the industrial applications of automation systems. Some milestones are highlighted, from the early attempts to use computers for the automation of biotechnological processes up to modern process automation systems. Special attention is given to the influence of standards and guidelines on the development of automation systems.

  15. Automation in Clinical Microbiology

    PubMed Central

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  16. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  17. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  18. Automated video surveillance: teaching an old dog new tricks

    NASA Astrophysics Data System (ADS)

    McLeod, Alastair

    1993-12-01

    The automated video surveillance market is booming with new players, new systems, new hardware and software, and an extended range of applications. This paper reviews available technology, and describes the features required for a good automated surveillance system. Both hardware and software are discussed. An overview of typical applications is also given. A shift towards PC-based hybrid systems, the use of parallel processing, neural networks, and the exploitation of modern telecommunications are introduced, highlighting the evolution of modern video surveillance systems.

  19. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
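
    The dynamic load-balancing step described above, with trees as the minimum quanta of migration, can be sketched as a greedy assignment that always gives the next (largest) tree to the least loaded process. The heuristic and data layout are assumptions for illustration, not the authors' exact scheme:

```python
import heapq

# Greedy (largest-tree-first) assignment of trees to processes so that
# each process ends up with approximately the same number of grid points.

def assign_trees(tree_points, n_procs):
    """Map tree index -> process id, balancing total grid points."""
    heap = [(0, p) for p in range(n_procs)]  # (current load, process id)
    heapq.heapify(heap)
    owner = {}
    for tree in sorted(range(len(tree_points)),
                       key=lambda i: -tree_points[i]):
        load, proc = heapq.heappop(heap)     # least loaded process
        owner[tree] = proc
        heapq.heappush(heap, (load + tree_points[tree], proc))
    return owner
```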

  20. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  1. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  2. Verbal and Visual Parallelism

    ERIC Educational Resources Information Center

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  3. Automated DNA Sequencing System

    SciTech Connect

    Armstrong, G.A.; Ekkebus, C.P.; Hauser, L.J.; Kress, R.L.; Mural, R.J.

    1999-04-25

    Oak Ridge National Laboratory (ORNL) is developing a core DNA sequencing facility to support biological research endeavors at ORNL and to conduct basic sequencing automation research. This facility is novel because its development is based on existing standard biology laboratory equipment; thus, the development process is of interest to the many small laboratories trying to use automation to control costs and increase throughput. Before automation, biology laboratory personnel purified DNA, completed cycle sequencing, and prepared 96-well sample plates with commercially available hardware designed specifically for each step in the process. Following purification and thermal cycling, an automated sequencing machine was used for the sequencing. A technician handled all movement of the 96-well sample plates between machines. To automate the process, ORNL is adding a CRS Robotics A-465 arm, ABI 377 sequencing machine, automated centrifuge, automated refrigerator, and possibly an automated SpeedVac. The entire system will be integrated with one central controller that will direct each machine and the robot. The goal of this system is to completely automate the sequencing procedure from bacterial cell samples through ready-to-be-sequenced DNA and ultimately to completed sequence. The system will be flexible and will accommodate different chemistries than existing automated sequencing lines. The system will be expanded in the future to include colony picking and/or actual sequencing. This discrete-event DNA sequencing system will demonstrate that smaller sequencing labs can achieve cost-effective automation as the laboratory grows.

  4. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  5. 21 CFR 866.2440 - Automated medium dispensing and stacking device.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated medium dispensing and stacking device... Automated medium dispensing and stacking device. (a) Identification. An automated medium dispensing and stacking device is a device intended for medical purposes to dispense a microbiological culture medium...

  6. 21 CFR 866.2440 - Automated medium dispensing and stacking device.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated medium dispensing and stacking device... Automated medium dispensing and stacking device. (a) Identification. An automated medium dispensing and stacking device is a device intended for medical purposes to dispense a microbiological culture medium...

  7. 21 CFR 866.2440 - Automated medium dispensing and stacking device.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated medium dispensing and stacking device... Automated medium dispensing and stacking device. (a) Identification. An automated medium dispensing and stacking device is a device intended for medical purposes to dispense a microbiological culture medium...

  8. 21 CFR 866.2440 - Automated medium dispensing and stacking device.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated medium dispensing and stacking device... Automated medium dispensing and stacking device. (a) Identification. An automated medium dispensing and stacking device is a device intended for medical purposes to dispense a microbiological culture medium...

  9. 21 CFR 866.2440 - Automated medium dispensing and stacking device.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated medium dispensing and stacking device... Automated medium dispensing and stacking device. (a) Identification. An automated medium dispensing and stacking device is a device intended for medical purposes to dispense a microbiological culture medium...

  10. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. Steps in parallelizing this code and the requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example, a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, the development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve processing efficiency.
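
    The quoted speedup translates directly into parallel efficiency:

```python
# Parallel efficiency implied by the report's figures: a speedup of 30
# on 36 Cray T3E processors.

def parallel_efficiency(speedup, n_procs):
    """Efficiency = speedup / processor count."""
    return speedup / n_procs

eff = parallel_efficiency(30, 36)  # about 0.833, i.e. roughly 83% efficiency
```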

  11. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
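    The explicit message-passing strategy (1) can be contrasted with a toy sketch. The following is a hypothetical illustration, not from the report (which concerns FORTRAN codes): threads with explicit queues stand in for processes exchanging messages via a library such as MPI.

    ```python
    # Hypothetical illustration of strategy (1): message passing built
    # directly into the code. Each worker thread plays the role of an
    # MPI rank that sends its partial result as a message.
    import threading
    import queue

    def worker(chunk, outbox):
        # Compute a partial result locally, then "send" it.
        outbox.put(sum(chunk))

    def parallel_sum(data, nworkers=4):
        outbox = queue.Queue()
        # Strided decomposition of the data across workers.
        chunks = [data[i::nworkers] for i in range(nworkers)]
        threads = [threading.Thread(target=worker, args=(c, outbox))
                   for c in chunks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # "Rank 0" receives one message per worker and reduces them.
        return sum(outbox.get() for _ in range(nworkers))
    ```

    Strategy (2) would hide the queue traffic behind a communications-library call; the source code itself would contain no explicit send/receive.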

  12. Laboratory Automation and Middleware.

    PubMed

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time, while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although total laboratory automation has not yet been achieved, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress made so far in the anatomic pathology laboratory.

  13. Hybrid Programmable Logic Controller for Load Automation

    NASA Astrophysics Data System (ADS)

    Shahzad, Aamir; Farooq, Hashim; Abbar, Sofia; Yousaf, Mushtaq; Hafeez, Kamran; Hanif, M.

    The purpose of this study is to design a Programmable Logic Controller (PLC) that commands 8 relays to control and automate AC loads via the PC parallel port. In this project the PLC is connected to a personal computer (hence "hybrid PLC"), and the PC controls all of the field AC loads via the parallel printer port. Eight signals of different sequences are sent on the parallel port via the computer keyboard; these activate the microcontroller as inputs. The microcontroller responds according to these inputs and its user programming, and in turn commands the 8 relays to switch (on/off) different electronic appliances. The microcontroller's memory makes it easy to store its programming permanently. This hybrid PLC is applicable to controlling and monitoring industrial processes, particularly small- to medium-scale manufacturing processes, and may be used for home automation as well. The parallel port is accessed by a program written in C++, and the microcontroller is programmed in assembly language. An AC load of any kind, whether resistive or inductive, can be controlled with the help of this project.
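    The 8-relay control scheme described above maps naturally onto the parallel port's 8-bit data register. A minimal sketch of that mapping, where the function name and the assumption that relay 0 corresponds to data bit 0 are hypothetical, and the actual port I/O (done in C++ in the project) is omitted:

    ```python
    # Hypothetical sketch: pack 8 relay on/off states into the single
    # byte written to the parallel port's data register (classically at
    # I/O address 0x378 on a PC). Relay 0 is assumed to be data bit 0.
    def relay_byte(states):
        byte = 0
        for bit, on in enumerate(states):
            if on:
                byte |= 1 << bit  # set this relay's bit
        return byte
    ```

    Writing the returned byte to the data register would energize exactly the relays whose bits are set.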

  14. Automated Fresnel lens tester system

    SciTech Connect

    Phipps, G.S.

    1981-07-01

    An automated data collection system controlled by a desktop computer has been developed for testing Fresnel concentrators (lenses) intended for solar energy applications. The system maps the two-dimensional irradiance pattern (image) formed in a plane parallel to the lens, while the lens and detector assembly track the sun. A point detector silicon diode (0.5-mm-dia active area) measures the irradiance at each point of an operator-defined rectilinear grid of data positions. Comparison with a second detector measuring solar insolation levels yields solar concentration ratios over the image plane. Summation of image plane energies allows calculation of lens efficiencies for various solar cell sizes. Various graphical plots of concentration ratio data help to visualize energy distribution patterns.
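    The ratio and summation arithmetic described above can be sketched as follows; the function names, the equal-area grid assumption, and all numbers are hypothetical, not taken from the report:

    ```python
    # Hypothetical sketch of the two quantities the tester computes.

    def concentration_ratio(irradiance, insolation):
        """Local concentration ratio at one grid point:
        measured image-plane irradiance over ambient insolation."""
        return irradiance / insolation

    def lens_efficiency(grid, cell_area, lens_area, insolation, cell_points):
        """Fraction of energy incident on the lens that lands on a cell.

        grid: dict mapping (x, y) -> measured irradiance
        cell_points: the grid points falling within the cell aperture
        Assumes each grid point represents an equal patch of image plane.
        """
        patch_area = cell_area / len(cell_points)
        energy_on_cell = sum(grid[p] for p in cell_points) * patch_area
        energy_on_lens = insolation * lens_area
        return energy_on_cell / energy_on_lens
    ```

    Summing over progressively larger sets of `cell_points` gives the efficiency-versus-cell-size curves the abstract mentions.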

  15. An 8-fold parallel reactor system for combinatorial catalysis research.

    PubMed

    Stoll, Norbert; Allwardt, Arne; Dingerdissen, Uwe; Thurow, Kerstin

    2006-01-01

    Increasing economic globalization and mounting time and cost pressure on the development of new raw materials for the chemical industry as well as materials and environmental engineering constantly raise the demands on technologies to be used. Parallelization, miniaturization, and automation are the main concepts involved in increasing the rate of chemical and biological experimentation.

  16. Parallel Processing Creates a Low-Cost Growth Path.

    ERIC Educational Resources Information Center

    Shekhel, Alex; Freeman, Eva

    1987-01-01

    Discusses the advantages of parallel processor computers in terms of expandability, cost, performance, and reliability, and suggests that such computers be used in library automation systems as a cost-effective approach to planning for the growth of information services and computer applications. (CLB)

  17. An 8-Fold Parallel Reactor System for Combinatorial Catalysis Research

    PubMed Central

    Stoll, Norbert; Allwardt, Arne; Dingerdissen, Uwe

    2006-01-01

    Increasing economic globalization and mounting time and cost pressure on the development of new raw materials for the chemical industry as well as materials and environmental engineering constantly raise the demands on technologies to be used. Parallelization, miniaturization, and automation are the main concepts involved in increasing the rate of chemical and biological experimentation. PMID:17671621

  18. New methods in combinatorial chemistry-robotics and parallel synthesis.

    PubMed

    Cargill, J F; Lebl, M

    1997-06-01

    Technological advances in the automation of parallel synthesis are following the model set by the semiconductor industry: miniaturization, increasing speed, lower costs. Recent work includes preparation of high-density reaction blocks, development of ink-jet dispensing to polypropylene sheets and synthesis inside customized microchips.

  19. Automation in the clinical microbiology laboratory.

    PubMed

    Novak, Susan M; Marlowe, Elizabeth M

    2013-09-01

    Imagine a clinical microbiology laboratory where a patient's specimens are placed on a conveyor belt and sent on an automation line for processing and plating. Technologists need only log onto a computer to visualize the images of a culture and send to a mass spectrometer for identification. Once a pathogen is identified, the system knows to send the colony for susceptibility testing. This is the future of the clinical microbiology laboratory. This article outlines the operational and staffing challenges facing clinical microbiology laboratories and the evolution of automation that is shaping the way laboratory medicine will be practiced in the future. PMID:23931839

  20. Automation in the clinical microbiology laboratory.

    PubMed

    Novak, Susan M; Marlowe, Elizabeth M

    2013-09-01

    Imagine a clinical microbiology laboratory where a patient's specimens are placed on a conveyor belt and sent on an automation line for processing and plating. Technologists need only log onto a computer to visualize the images of a culture and send to a mass spectrometer for identification. Once a pathogen is identified, the system knows to send the colony for susceptibility testing. This is the future of the clinical microbiology laboratory. This article outlines the operational and staffing challenges facing clinical microbiology laboratories and the evolution of automation that is shaping the way laboratory medicine will be practiced in the future.

  1. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search of a single individual, a typical sequential process. To ensure that the parallel version retains the features of the sequential random search, we analyze the spatial patterns of the encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
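    The fitting step described above, estimating the lognormal's parameters from the sequential run and then letting each parallel walker draw its inter-target distances from that fitted distribution, can be sketched as follows; the function names are hypothetical and the fit is a plain log-moment estimate:

    ```python
    # Hypothetical sketch of the lognormal-matching step.
    import math
    import random

    def fit_lognormal(distances):
        """Estimate (mu, sigma) of a lognormal from observed
        inter-target distances, via moments of the log-distances."""
        logs = [math.log(d) for d in distances]
        mu = sum(logs) / len(logs)
        var = sum((x - mu) ** 2 for x in logs) / len(logs)
        return mu, math.sqrt(var)

    def parallel_walker_distances(mu, sigma, n, rng):
        """Each parallel walker draws its next inter-target distances
        from the fitted lognormal instead of re-running the
        sequential walk."""
        return [rng.lognormvariate(mu, sigma) for _ in range(n)]
    ```

    With the per-configuration (mu, sigma) in hand, the independent walkers need no communication, which is what makes the near order-of-magnitude speedup possible.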

  2. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  3. CS-Studio Scan System Parallelization

    SciTech Connect

    Kasemir, Kay; Pearson, Matthew R

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  4. Automating checks of plan check automation.

    PubMed

    Halabi, Tarek; Lu, Hsiao-Ming

    2014-07-08

    While a few physicists have designed new plan check automation solutions for their clinics, fewer, if any, managed to adapt existing solutions. As complex and varied as the systems they check, these programs must gain the full confidence of those who would run them on countless patient plans. The present automation effort, planCheck, therefore focuses on versatility and ease of implementation and verification. To demonstrate this, we apply planCheck to proton gantry, stereotactic proton gantry, stereotactic proton fixed beam (STAR), and IMRT treatments.

  5. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that of CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and the implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with partitioned knowledge bases indicate that significant speed increases, including superlinear speedup in some cases, are possible.

  6. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
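    The SENSE-type unfolding mentioned above can be illustrated for the simplest case of acceleration factor R = 2 with two coils: each coil's aliased pixel is a sensitivity-weighted sum of the two superimposed true pixels, giving a 2x2 linear system per aliased location. A hypothetical sketch (real reconstructions work with complex data, many coils, and regularized matrix inverses):

    ```python
    # Hypothetical SENSE unfolding for R = 2, two coils.
    def sense_unfold(a1, a2, s1, s2):
        """Recover the two superimposed pixel values (vA, vB).

        a1, a2: aliased pixel values seen by coils 1 and 2
        s1, s2: each coil's sensitivities (sA, sB) at the two
                true locations that fold onto this aliased pixel
        Solves  [s1A s1B] [vA] = [a1]
                [s2A s2B] [vB]   [a2]   by Cramer's rule.
        """
        (s1A, s1B), (s2A, s2B) = s1, s2
        det = s1A * s2B - s1B * s2A  # must be nonzero to unfold
        vA = (a1 * s2B - s1B * a2) / det
        vB = (s1A * a2 - a1 * s2A) / det
        return vA, vB
    ```

    When the coil sensitivities at the two locations are too similar, `det` approaches zero and noise is amplified, which is one source of the parallel-imaging artifacts discussed in the article.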

  7. Work and Programmable Automation.

    ERIC Educational Resources Information Center

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  8. Automation and Cataloging.

    ERIC Educational Resources Information Center

    Furuta, Kenneth; And Others

    1990-01-01

    These three articles address issues in library cataloging that are affected by automation: (1) the impact of automation and bibliographic utilities on professional catalogers; (2) the effect of the LASS microcomputer software on the cost of authority work in cataloging at the University of Arizona; and (3) online subject heading and classification…

  9. Library Automation Style Guide.

    ERIC Educational Resources Information Center

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  10. More Benefits of Automation.

    ERIC Educational Resources Information Center

    Getz, Malcolm

    1988-01-01

    Describes a study that measured the benefits of an automated catalog and automated circulation system from the library user's point of view in terms of the value of time saved. Topics discussed include patterns of use, access time, availability of information, search behaviors, and the effectiveness of the measures used. (seven references)…

  11. Educating Archivists for Automation.

    ERIC Educational Resources Information Center

    Weber, Lisa B.

    1988-01-01

    Archivists indicate they want to learn more about automation in archives, the MARC AMC (Archival and Manuscripts Control) format, and emerging computer technologies; they look for educational opportunities through professional associations, publications, and college coursework; future archival automation education needs include standards, shared…

  12. Automation and robotics

    NASA Technical Reports Server (NTRS)

    Montemerlo, Melvin

    1988-01-01

    The Autonomous Systems focus on the automation of control systems for the Space Station and mission operations. Telerobotics focuses on automation for in-space servicing, assembly, and repair. The Autonomous Systems and Telerobotics each have a planned sequence of integrated demonstrations showing the evolutionary advance of the state-of-the-art. Progress is briefly described for each area of concern.

  13. Embodied and Distributed Parallel DJing.

    PubMed

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.

  14. Embodied and Distributed Parallel DJing.

    PubMed

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things. PMID:27534347

  15. Advances in inspection automation

    NASA Astrophysics Data System (ADS)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone, or impossible for humans to perform can now be automated using a form of drag-and-drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures, and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations, and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  16. Automation in Immunohematology

    PubMed Central

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-01-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test-tube technique has given way to newer techniques such as the column agglutination technique, solid-phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents, and processes and the archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process. PMID:22988378

  17. Automation in immunohematology.

    PubMed

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-07-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test-tube technique has given way to newer techniques such as the column agglutination technique, solid-phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents, and processes and the archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process. PMID:22988378

  18. Automation in immunohematology.

    PubMed

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-07-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test-tube technique has given way to newer techniques such as the column agglutination technique, solid-phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents, and processes and the archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process.

  19. Automated manufacturing of chimeric antigen receptor T cells for adoptive immunotherapy using CliniMACS prodigy.

    PubMed

    Mock, Ulrike; Nickolay, Lauren; Philip, Brian; Cheung, Gordon Weng-Kit; Zhan, Hong; Johnston, Ian C D; Kaiser, Andrew D; Peggs, Karl; Pule, Martin; Thrasher, Adrian J; Qasim, Waseem

    2016-08-01

    Novel cell therapies derived from human T lymphocytes are exhibiting enormous potential in early-phase clinical trials in patients with hematologic malignancies. Ex vivo modification of T cells is currently limited to a small number of centers with the required infrastructure and expertise. The process requires isolation, activation, transduction, expansion and cryopreservation steps. To simplify procedures and widen applicability for clinical therapies, automation of these procedures is being developed. The CliniMACS Prodigy (Miltenyi Biotec) has recently been adapted for lentiviral transduction of T cells and here we analyse the feasibility of a clinically compliant T-cell engineering process for the manufacture of T cells encoding chimeric antigen receptors (CAR) for CD19 (CAR19), a widely targeted antigen in B-cell malignancies. Using a closed, single-use tubing set we processed mononuclear cells from fresh or frozen leukapheresis harvests collected from healthy volunteer donors. Cells were phenotyped and subjected to automated processing and activation using TransAct, a polymeric nanomatrix activation reagent incorporating CD3/CD28-specific antibodies. Cells were then transduced and expanded in the CentriCult-Unit of the tubing set, under stabilized culture conditions with automated feeding and media exchange. The process was continuously monitored to determine kinetics of expansion, transduction efficiency and phenotype of the engineered cells in comparison with small-scale transductions run in parallel. We found that transduction efficiencies, phenotype and function of CAR19 T cells were comparable with existing procedures and overall T-cell yields sufficient for anticipated therapeutic dosing. The automation of closed-system T-cell engineering should improve dissemination of emerging immunotherapies and greatly widen applicability. PMID:27378344

  20. Automated manufacturing of chimeric antigen receptor T cells for adoptive immunotherapy using CliniMACS prodigy.

    PubMed

    Mock, Ulrike; Nickolay, Lauren; Philip, Brian; Cheung, Gordon Weng-Kit; Zhan, Hong; Johnston, Ian C D; Kaiser, Andrew D; Peggs, Karl; Pule, Martin; Thrasher, Adrian J; Qasim, Waseem

    2016-08-01

    Novel cell therapies derived from human T lymphocytes are exhibiting enormous potential in early-phase clinical trials in patients with hematologic malignancies. Ex vivo modification of T cells is currently limited to a small number of centers with the required infrastructure and expertise. The process requires isolation, activation, transduction, expansion and cryopreservation steps. To simplify procedures and widen applicability for clinical therapies, automation of these procedures is being developed. The CliniMACS Prodigy (Miltenyi Biotec) has recently been adapted for lentiviral transduction of T cells and here we analyse the feasibility of a clinically compliant T-cell engineering process for the manufacture of T cells encoding chimeric antigen receptors (CAR) for CD19 (CAR19), a widely targeted antigen in B-cell malignancies. Using a closed, single-use tubing set we processed mononuclear cells from fresh or frozen leukapheresis harvests collected from healthy volunteer donors. Cells were phenotyped and subjected to automated processing and activation using TransAct, a polymeric nanomatrix activation reagent incorporating CD3/CD28-specific antibodies. Cells were then transduced and expanded in the CentriCult-Unit of the tubing set, under stabilized culture conditions with automated feeding and media exchange. The process was continuously monitored to determine kinetics of expansion, transduction efficiency and phenotype of the engineered cells in comparison with small-scale transductions run in parallel. We found that transduction efficiencies, phenotype and function of CAR19 T cells were comparable with existing procedures and overall T-cell yields sufficient for anticipated therapeutic dosing. The automation of closed-system T-cell engineering should improve dissemination of emerging immunotherapies and greatly widen applicability.

  1. Eclipse Parallel Tools Platform

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel, and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

  2. Automation of Hubble Space Telescope Mission Operations

    NASA Technical Reports Server (NTRS)

    Burley, Richard; Goulet, Gregory; Slater, Mark; Huey, William; Bassford, Lynn; Dunham, Larry

    2012-01-01

    On June 13, 2011, after more than 21 years, 115 thousand orbits, and nearly 1 million exposures taken, the operation of the Hubble Space Telescope successfully transitioned from 24x7x365 staffing to 8x5 staffing. This required the automation of routine mission operations including telemetry and forward-link acquisition, data dumping and solid-state recorder management, stored command loading, and health and safety monitoring of both the observatory and the HST Ground System. These changes were driven by budget reductions, and required ground system and onboard spacecraft enhancements across the entire operations spectrum, from planning and scheduling systems to payload flight software. Changes in personnel and staffing were required in order to adapt to the new roles and responsibilities of the new automated operations era. This paper provides a high-level overview of the obstacles to automating nominal HST mission operations, both technical and cultural, and how those obstacles were overcome.

  3. A digital microfluidic platform for primary cell culture and analysis.

    PubMed

    Srigunapalan, Suthan; Eydelnant, Irwin A; Simmons, Craig A; Wheeler, Aaron R

    2012-01-21

    Digital microfluidics (DMF) is a technology that facilitates electrostatic manipulation of discrete nano- and micro-litre droplets across an array of electrodes, which provides the advantages of single sample addressability, automation, and parallelization. There has been considerable interest in recent years in using DMF for cell culture and analysis, but previous studies have used immortalized cell lines. We report here the first digital microfluidic method for primary cell culture and analysis. A new mode of "upside-down" cell culture was implemented by patterning the top plate of a device using a fluorocarbon liftoff technique. This method was useful for culturing three different primary cell types for up to one week, as well as implementing a fixation, permeabilization, and staining procedure for F-actin and nuclei. A multistep assay for monocyte adhesion to endothelial cells (ECs) was performed to evaluate functionality in DMF-cultured primary cells and to demonstrate co-culture using a DMF platform. Monocytes were observed to adhere in significantly greater numbers to ECs exposed to tumor necrosis factor (TNF)-α than those that were not, confirming that ECs cultured in this format maintain in vivo-like properties. The ability to manipulate, maintain, and assay primary cells demonstrates a useful application for DMF in studies involving precious samples of cells from small animals or human patients.

  4. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.
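
    The abstract does not spell out the Lisp process-control primitives that CSIM models, so the following is only an illustrative sketch, in Python, of the fork/join style such primitives provide. The name `pcall`, the thread pool, and the example computations are assumptions made for illustration, not details from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

# A small shared pool stands in for the simulated multiprocessor.
_pool = ThreadPoolExecutor(max_workers=4)

def pcall(*thunks):
    """Evaluate the argument thunks concurrently and return their values
    in order -- loosely analogous to a parallel-let construct in a
    parallel Lisp, where each binding is computed by a separate process."""
    futures = [_pool.submit(t) for t in thunks]   # fork
    return [f.result() for f in futures]          # join: wait for each value

# Example: two independent computations evaluated side by side.
total, biggest = pcall(lambda: sum(range(1000)), lambda: max(range(1000)))
assert (total, biggest) == (499500, 999)
```

    Because the thunks are independent, no synchronization beyond the final join is needed; a continuation-passing interpreter such as CSIM can interleave their evaluation to model a shared-memory machine.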

  5. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  6. Automated planar patch-clamp.

    PubMed

    Milligan, Carol J; Möller, Clemens

    2013-01-01

    Ion channels are integral membrane proteins that regulate the flow of ions across the plasma membrane and the membranes of intracellular organelles of both excitable and non-excitable cells. Ion channels are vital to a wide variety of biological processes and are prominent components of the nervous system and cardiovascular system, as well as controlling many metabolic functions. Furthermore, ion channels are known to be involved in many disease states and as such have become popular therapeutic targets. For many years now manual patch-clamping has been regarded as one of the best approaches for assaying ion channel function, through direct measurement of ion flow across these membrane proteins. Over the last decade there have been many remarkable breakthroughs in the development of technologies enabling the study of ion channels. One of these breakthroughs is the development of automated planar patch-clamp technology. Automated platforms have demonstrated the ability to generate high-quality data with high throughput capabilities, at great efficiency and reliability. Additional features such as simultaneous intracellular and extracellular perfusion of the cell membrane, current clamp operation, fast compound application, an increasing rate of parallelization, and more recently temperature control have been introduced. Furthermore, in addition to the well-established studies of over-expressed ion channel proteins in cell lines, new generations of planar patch-clamp systems have enabled successful studies of native and primary mammalian cells. This technology is becoming increasingly popular and extensively used both within areas of drug discovery as well as academic research. Many platforms have been developed including NPC-16 Patchliner(®) and SyncroPatch(®) 96 (Nanion Technologies GmbH, Munich), CytoPatch™ (Cytocentrics AG, Rostock), PatchXpress(®) 7000A, IonWorks(®) Quattro and IonWorks Barracuda™, (Molecular Devices, LLC); Dynaflow(®) HT (Cellectricon

  7. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  8. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  9. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
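
    The paper's hypercube implementation is not reproduced in the abstract, so the following is only a rough Python sketch of the block-decomposition idea it describes: seed primes are found serially, then each block of the range is sieved independently, so blocks could be farmed out to separate processing elements. The block sizing and function names are my own assumptions, not details from the paper.

```python
from math import isqrt

def base_primes(limit):
    """Serial sieve for the seed primes up to `limit`."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if flags[p]:
            flags[p * p::p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i in range(2, limit + 1) if flags[i]]

def sieve_block(lo, hi, seeds):
    """Mark composites in [lo, hi); each block is independent of the
    others, so blocks can be distributed across processors."""
    flags = bytearray([1]) * (hi - lo)
    for p in seeds:
        start = max(p * p, ((lo + p - 1) // p) * p)  # first multiple >= lo
        for m in range(start, hi, p):
            flags[m - lo] = 0
    return [lo + i for i in range(hi - lo) if flags[i]]

def parallel_sieve(n, nblocks=4):
    """Primes up to n via nblocks independent block sieves."""
    seeds = base_primes(isqrt(n))
    size = (n + nblocks) // nblocks
    primes = []
    for b in range(nblocks):            # each iteration models one PE
        lo = max(2, b * size)
        hi = min((b + 1) * size, n + 1)
        if lo < hi:
            primes.extend(sieve_block(lo, hi, seeds))
    return primes
```

    Only the small seed-prime list is shared between blocks, which is why the decomposition scales well: communication is limited to broadcasting the seeds and gathering the per-block results.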

  10. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.

  11. Automation synthesis modules review.

    PubMed

    Boschi, S; Lodi, F; Malizia, C; Cicoria, G; Marengo, M

    2013-06-01

    The introduction of (68)Ga labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived (68)Ge/(68)Ga generator has been at the basis of the development of (68)Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues, and the careful radioprotection of operators have pushed for extensive automation of the production process. The development of automated systems for (68)Ga radiochemistry, different engineering and software strategies, and post-processing of the eluate are discussed, along with the impact of automation on compliance with regulations.

  12. A centralized global automation group in a decentralized organization.

    PubMed

    Ormand, J; Bruner, J; Birkemo, L; Hinderliter-Smith, J; Veitch, J

    2000-01-01

    In the latter part of the 1990s, many companies have worked to foster a 'matrix' style culture through several changes in organizational structure. This type of culture facilitates communication and development of new technology across organizational and global boundaries. At Glaxo Wellcome, this matrix culture is reflected in an automation strategy that relies on both centralized and decentralized resources. The Group Development Operations Information Systems Robotics Team is a centralized resource providing development, support, integration, and training in laboratory automation across businesses in the Development organization. The matrix culture still presents challenges with respect to communication and managing the development of technology. A current challenge for our team is to go beyond our recognized role as a technology resource and actually to influence automation strategies across the global Development organization. We shall provide an overview of our role as a centralized resource, our team strategy, examples of current and past successes and failures, and future directions.

  13. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  14. Automating Shallow 3D Seismic Imaging

    SciTech Connect

    Steeples, Don; Tsoflias, George

    2009-01-15

    Our efforts since 1997 have been directed toward developing ultra-shallow seismic imaging as a cost-effective method applicable to DOE facilities. This report covers the final year of grant-funded research to refine 3D shallow seismic imaging, which built on a previous 7-year grant (FG07-97ER14826) that refined and demonstrated the use of an automated method of conducting shallow seismic surveys; this represents a significant departure from conventional seismic-survey field procedures. The primary objective of this final project was to develop an automated three-dimensional (3D) shallow-seismic reflection imaging capability. This is a natural progression from our previous published work and is conceptually parallel to the innovative imaging methods used in the petroleum industry.

  15. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  16. Automated Lattice Perturbation Theory

    SciTech Connect

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  17. Automated Pilot Advisory System

    NASA Technical Reports Server (NTRS)

    Parks, J. L., Jr.; Haidt, J. G.

    1981-01-01

    An Automated Pilot Advisory System (APAS) was developed and operationally tested to demonstrate the concept that low cost automated systems can provide air traffic and aviation weather advisory information at high density uncontrolled airports. The system was designed to enhance the see and be seen rule of flight, and pilots who used the system preferred it over the self announcement system presently used at uncontrolled airports.

  18. Automated Status Notification System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA Lewis Research Center's Automated Status Notification System (ASNS) was born out of need. To prevent "hacker attacks," Lewis' telephone system needed to monitor communications activities 24 hr a day, 7 days a week. With decreasing staff resources, this continuous monitoring had to be automated. By utilizing existing communications hardware, a UNIX workstation, and NAWK (a pattern scanning and processing language), we implemented a continuous monitoring system.

  19. Automated Groundwater Screening

    SciTech Connect

    Taylor, Glenn A.; Collard, Leonard, B.

    2005-10-31

    The Automated Intruder Analysis has been extended to include an Automated Groundwater Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides which have significant daughters. The extension has the promise of reducing the number of radionuclides which must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening, a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing rigorous calculation of the screening factor, thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application.

  20. Automated imagery orthorectification pilot

    NASA Astrophysics Data System (ADS)

    Slonecker, E. Terrence; Johnson, Brad; McMahon, Joe

    2009-10-01

    Automated orthorectification of raw image products is now possible based on the comprehensive metadata collected by Global Positioning Systems and Inertial Measurement Unit technology aboard aircraft and satellite digital imaging systems, and based on emerging pattern-matching and automated image-to-image and control point selection capabilities in many advanced image processing systems. Automated orthorectification of standard aerial photography is also possible if a camera calibration report and sufficient metadata is available. Orthorectification of historical imagery, for which only limited metadata was available, was also attempted and found to require some user input, creating a semi-automated process that still has significant potential to reduce processing time and expense for the conversion of archival historical imagery into geospatially enabled, digital formats, facilitating preservation and utilization of a vast archive of historical imagery. Over 90 percent of the frames of historical aerial photos used in this experiment were successfully orthorectified to the accuracy of the USGS 100K base map series utilized for the geospatial reference of the archive. The accuracy standard for the 100K series maps is approximately 167 feet (51 meters). The main problems associated with orthorectification failure were cloud cover, shadow and historical landscape change which confused automated image-to-image matching processes. Further research is recommended to optimize automated orthorectification methods and enable broad operational use, especially as related to historical imagery archives.

  1. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  2. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  3. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth
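
    The systems above stripe traffic over real protocol processors and FDDI rings; as a much simpler illustration of the space-division-multiplexing idea, here is a toy Python sketch that splits a message round-robin across simulated channels and reassembles it from cells arriving in any order. The cell size and tagging scheme are assumptions for illustration, not details from the paper.

```python
CELL = 64  # bytes carried per cell; an arbitrary choice for this sketch

def stripe(data, nchannels):
    """Split a message round-robin across nchannels; every cell is tagged
    with (channel, sequence number) so the receiver can reorder."""
    cells = []
    for seq, i in enumerate(range(0, len(data), CELL)):
        cells.append((seq % nchannels, seq, data[i:i + CELL]))
    return cells

def reassemble(cells):
    """Rebuild the message from cells arriving in any order, using the
    sequence numbers to restore the original byte order."""
    return b"".join(payload for _, _, payload in
                    sorted(cells, key=lambda c: c[1]))

msg = bytes(range(256)) * 2          # a 512-byte test message
cells = stripe(msg, 4)
assert reassemble(list(reversed(cells))) == msg
```

    The sequence tags are what make the parallel channels transparent to the application: any single channel can be slow or even fail mid-stream (graceful degradation) as long as its cells eventually arrive and are reordered.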

  4. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  5. Artificial intelligence in parallel

    SciTech Connect

    Waldrop, M.M.

    1984-08-10

    The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

  6. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.
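
    The paper's density model is not reproduced in the abstract; the Python sketch below only illustrates the underlying point-line duality it builds on: a 2-D point becomes a line segment between two parallel axes, and all points lying on one scatterplot line y = m*x + b (m != 1) produce segments through a single dual point. The axis spacing `d` and function names are assumptions for illustration.

```python
def pc_segment(x1, x2, d=1.0):
    """Map the 2-D data point (x1, x2) to its parallel-coordinates segment:
    (0, x1) on the first axis, (d, x2) on the second, with axis spacing d."""
    return (0.0, x1), (d, x2)

def dual_point(m, b, d=1.0):
    """Dual of the scatterplot line y = m*x + b (m != 1): the single point in
    parallel coordinates where the segments of all its points intersect."""
    return d / (1.0 - m), b / (1.0 - m)

def intersect(seg_a, seg_b):
    """Intersection of two parallel-coordinate segments (extended to lines)."""
    (ax0, ay0), (ax1, ay1) = seg_a
    (bx0, by0), (bx1, by1) = seg_b
    sa = (ay1 - ay0) / (ax1 - ax0)   # slope of segment a
    sb = (by1 - by0) / (bx1 - bx0)   # slope of segment b
    x = (by0 - ay0) / (sa - sb)
    return x, ay0 + sa * x

# Two points on the scatterplot line y = 2x + 1 map to segments that
# cross exactly at the line's dual point.
assert intersect(pc_segment(0, 1), pc_segment(1, 3)) == dual_point(2, 1)
```

    With m > 1 the dual point falls outside the strip between the axes (negative x here), which is one reason the paper works with a density field rather than raw intersections.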

  7. Automated Generation of Message-Passing Programs: An Evaluation of CAPTools using NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Jin, Hao-Qiang; Yan, Jerry C.; Bailey, David (Technical Monitor)

    1998-01-01

    Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During the same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort in migrating and recoding our applications. As applications and machine architectures continue to become increasingly complex, the cost and time required for this process will become prohibitive. Various attempts to exploit software tools to assist and automate the parallelization process have not produced favorable results. In this paper, we evaluate an interactive parallelization tool, CAPTools, for parallelizing serial versions of the NAS Parallel Benchmarks. Finally, we compare the performance of the resulting CAPTools-generated code to the hand-coded benchmarks on the Origin 2000 and IBM SP2. Based on these results, a discussion on the feasibility of automated parallelization of aerospace applications is presented along with suggestions for future work.

  8. Automated macromolecular crystal detection system and method

    DOEpatents

    Christian, Allen T.; Segelke, Brent; Rupp, Bernard; Toppani, Dominique

    2007-06-05

    An automated method and system for detecting macromolecular crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected from the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are subsequently geometrically evaluated with respect to each other to identify any crystal-like qualities such as, for example, parallel lines facing each other, similarity in length, and relative proximity. From the evaluation a determination is made as to whether crystals are present in each image.

  9. Automated inspection of hot steel slabs

    DOEpatents

    Martin, Ronald J.

    1985-01-01

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes.

  10. Automated inspection of hot steel slabs

    DOEpatents

    Martin, R.J.

    1985-12-24

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
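
    The two patent records above describe running an edge process and an intensity-threshold process in parallel on the same image data and keeping only segmentation confirmed by both. As a toy stand-in for that real-time hardware pipeline, here is a minimal Python sketch on a list-of-lists image; the neighbour-difference edge rule, the parameter names, and the example image are assumptions for illustration only.

```python
def edge_map(img, thresh=1):
    """Toy edge detector: a pixel is an edge if any 4-neighbour differs
    from it by at least `thresh` (stand-in for the edge process)."""
    h, w = len(img), len(img[0])
    def big_diff(y, x, dy, dx):
        ny, nx = y + dy, x + dx
        return 0 <= ny < h and 0 <= nx < w and abs(img[ny][nx] - img[y][x]) >= thresh
    return [[1 if any(big_diff(y, x, dy, dx)
                      for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0))) else 0
             for x in range(w)] for y in range(h)]

def threshold_map(img, level):
    """Intensity process: mark pixels at or above `level`."""
    return [[1 if p >= level else 0 for p in row] for row in img]

def validated(img, level, thresh=1):
    """Keep only pixels confirmed by BOTH processes; in the patent the two
    processes run in parallel on the same data and validate each other."""
    edges, hot = edge_map(img, thresh), threshold_map(img, level)
    return [[e & t for e, t in zip(erow, trow)]
            for erow, trow in zip(edges, hot)]

# A bright streak on a dark slab: only its pixels survive both tests.
img = [[0, 0, 9],
       [0, 0, 9],
       [0, 0, 9]]
assert validated(img, level=5) == [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
```

    The mutual validation is the point: a uniformly bright region passes the threshold test but has no edges, and a faint texture has edges but fails the threshold, so single-process false alarms are suppressed.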

  11. Evaluation of an automated agar plate streaker.

    PubMed Central

    Tilton, R C; Ryan, R W

    1978-01-01

    An automated agar plate streaker was evaluated. The Autostreaker mechanizes the agar plate streaking process by providing storage for plates, labeling and streaking one or more plates for either isolation or quantitation, and stacking in one of several racks for subsequent incubation. Results showed the Autostreaker to produce agar plates with well-separated colonies and accurate colony counts. A total of 1,930 clinical specimens were processed either in parallel with manual methods or solely by the Autostreaker. Technologist acceptance of machine-streaked plates was outstanding. PMID:348722

  12. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools: an interactive computer-aided parallelization tool that generates message passing code, 2) the Portland Group's HPF compiler and 3) using compiler directives with the native FORTRAN 77 compiler on the SGI Origin2000.

  13. An automated method for DNA preparation from thousands of YAC clones.

    PubMed Central

    MacMurray, A J; Weaver, A; Shin, H S; Lander, E S

    1991-01-01

    We describe an automated method for the preparation of yeast genomic DNA capable of preparing thousands of DNAs in parallel from a YAC library. Briefly, the protocol involves four steps: (1) Yeast clones are grown in the wells of 96-well microtiter plates with filter (rather than plastic) well-bottoms, which are embedded in solid growth media; (2) These yeast cultures are resuspended and their concentrations determined by optical density measurement; (3) Equal numbers of cells from each well are embedded in low-melting temperature agarose blocks in fresh 96-well plates, again with filter bottoms; and (4) DNA is prepared in the agarose blocks by a protocol similar to that used for preparing DNA for pulsed-field gels, with the reagents being dialyzed through the (filter) bottoms of the microtiter plate. The DNA produced by this method is suitable for pulsed-field gel electrophoresis, for restriction enzyme digestion, and for the polymerase chain reaction (PCR). Using this protocol, we produced 3000 YAC strain DNAs in three weeks. This automated procedure should be extremely useful in many genomic mapping projects. PMID:2014175

  14. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  15. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading toward systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
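
    The serial bottleneck that motivates this package can be made concrete with a toy example. The Python sketch below (our own illustration, not part of the software) time-marches the scalar decay equation u' = -u with backward Euler; every step depends on the previous one, which is exactly the sequential chain that multigrid-reduction-in-time relaxes:

```python
import math

def backward_euler_decay(u0, rate, dt, steps):
    """Sequentially time-march u' = -rate*u with backward Euler.

    Each step needs the previous step's value, so a traditional
    time-marching loop is inherently serial in the time dimension.
    """
    u = u0
    history = [u]
    for _ in range(steps):
        # Implicit update: solve u_new + rate*dt*u_new = u_old.
        u = u / (1.0 + rate * dt)
        history.append(u)
    return history

hist = backward_euler_decay(u0=1.0, rate=1.0, dt=0.001, steps=1000)
```

With dt = 0.001 the final value agrees with the exact solution exp(-1) to a few parts in ten thousand; the appeal of MGRIT is that chains like this can instead be solved iteratively with parallelism across all time steps at once.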

  17. Parallelism in System Tools

    SciTech Connect

    Matney, Sr., Kenneth D; Shipman, Galen M

    2010-01-01

    The Cray XT, when employed in conjunction with the Lustre filesystem, has provided the ability to generate huge amounts of data in the form of many files. Typically, this is accommodated by satisfying the requests of large numbers of Lustre clients in parallel. In contrast, a single service node (Lustre client) cannot adequately service such datasets. This means that the use of traditional UNIX tools like cp, tar, et al. (which have no parallel capability) can result in substantial impact to user productivity. For example, to copy a 10 TB dataset from the service node using cp would take about 24 hours, under more or less ideal conditions. During production operation, this could easily extend to 36 hours. In this paper, we introduce the Lustre User Toolkit for Cray XT, developed at the Oak Ridge Leadership Computing Facility (OLCF). We will show that Linux commands, implementing highly parallel I/O algorithms, provide orders of magnitude greater performance, greatly reducing impact to productivity.
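
    The core idea, handing independent byte ranges of one large file to concurrent workers, can be sketched in a few lines of Python. This is an illustration of the approach only; the OLCF toolkit's actual commands and interfaces are not shown, and all names below are ours:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def parallel_copy(src, dst, workers=4, chunk=1 << 20):
    """Copy src to dst by assigning independent byte ranges to worker threads."""
    size = os.path.getsize(src)
    with open(dst, "wb") as f:
        f.truncate(size)          # pre-size so workers can write at any offset
    src_fd = os.open(src, os.O_RDONLY)
    dst_fd = os.open(dst, os.O_WRONLY)
    try:
        def copy_range(offset, length):
            while length > 0:
                data = os.pread(src_fd, min(chunk, length), offset)
                os.pwrite(dst_fd, data, offset)
                offset += len(data)
                length -= len(data)

        step = (size + workers - 1) // workers or 1
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for off in range(0, size, step):
                pool.submit(copy_range, off, min(step, size - off))
    finally:
        os.close(src_fd)
        os.close(dst_fd)

# Demo on a small scratch file (paths are hypothetical).
tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "src.bin")
dst_path = os.path.join(tmp, "dst.bin")
with open(src_path, "wb") as f:
    f.write(bytes(range(256)) * 4096)   # ~1 MiB of test data
parallel_copy(src_path, dst_path, workers=3, chunk=65536)
```

On a parallel filesystem such as Lustre, each range lands on independent storage targets, which is where the orders-of-magnitude gains over serial cp come from.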

  18. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  19. Automated telescope scheduling

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.

    1988-01-01

    With the ever-increasing level of automation of astronomical telescopes, the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling groundbased telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.

  20. Materials Testing and Automation

    NASA Astrophysics Data System (ADS)

    Cooper, Wayne D.; Zweigoron, Ronald B.

    1980-07-01

    The advent of automation in materials testing has been in large part responsible for recent radical changes in the materials testing field: Tests virtually impossible to perform without a computer have become more straightforward to conduct. In addition, standardized tests may be performed with enhanced efficiency and repeatability. A typical automated system is described in terms of its primary subsystems — an analog station, a digital computer, and a processor interface. The processor interface links the analog functions with the digital computer; it includes data acquisition, command function generation, and test control functions. Features of automated testing are described with emphasis on calculated variable control, control of a variable that is computed by the processor and cannot be read directly from a transducer. Three calculated variable tests are described: a yield surface probe test, a thermomechanical fatigue test, and a constant-stress-intensity range crack-growth test. Future developments are discussed.

  1. Automated Factor Slice Sampling

    PubMed Central

    Tibbits, Matthew M.; Groendyke, Chris; Haran, Murali; Liechty, John C.

    2013-01-01

    Markov chain Monte Carlo (MCMC) algorithms offer a very general approach for sampling from arbitrary distributions. However, designing and tuning MCMC algorithms for each new distribution can be challenging and time consuming. It is particularly difficult to create an efficient sampler when there is strong dependence among the variables in a multivariate distribution. We describe a two-pronged approach for constructing efficient, automated MCMC algorithms: (1) we propose the “factor slice sampler”, a generalization of the univariate slice sampler where we treat the selection of a coordinate basis (factors) as an additional tuning parameter, and (2) we develop an approach for automatically selecting tuning parameters in order to construct an efficient factor slice sampler. In addition to automating the factor slice sampler, our tuning approach also applies to the standard univariate slice samplers. We demonstrate the efficiency and general applicability of our automated MCMC algorithm with a number of illustrative examples. PMID:24955002
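
    The univariate slice sampler that the factor slice sampler generalizes is simple to implement. Below is a minimal Python sketch using the standard stepping-out and shrinkage procedure; the function name, the fixed width w, and the seed are our own choices, not part of the paper:

```python
import math
import random

def slice_sample(logpdf, x0, n, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage."""
    rng = rng or random.Random(0)
    samples, x = [], x0
    for _ in range(n):
        # Draw an auxiliary "height" uniformly under the density at x.
        logy = logpdf(x) + math.log(rng.random())
        # Step out an interval of width w until it brackets the slice.
        lo = x - w * rng.random()
        hi = lo + w
        while logpdf(lo) > logy:
            lo -= w
        while logpdf(hi) > logy:
            hi += w
        # Shrinkage: propose uniformly inside, shrink toward x on rejection.
        while True:
            x1 = lo + (hi - lo) * rng.random()
            if logpdf(x1) > logy:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        samples.append(x)
    return samples

# Sample a standard normal via its (unnormalized) log-density.
draws = slice_sample(lambda t: -0.5 * t * t, x0=0.0, n=5000)
```

The factor slice sampler applies this same univariate update along adaptively chosen basis directions rather than the raw coordinates, which is what handles strong dependence among variables.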

  2. Automation in medicinal chemistry.

    PubMed

    Reader, John C

    2004-01-01

    The implementation of appropriate automation can make a significant improvement in productivity at each stage of the drug discovery process, if it is incorporated into an efficient overall process. Automated chemistry has evolved rapidly from the 'combinatorial' techniques implemented in many industrial laboratories in the early 1990s, which focused primarily on the hit discovery phase and were highly dependent on solid-phase techniques and instrumentation derived from peptide synthesis. Automated tools and strategies have been developed which can impact the hit discovery, hit expansion and lead optimization phases, not only in synthesis, but also in reaction optimization, work-up, and purification of compounds. This article discusses the implementation of some of these techniques, based especially on experiences at Millennium Pharmaceuticals Research and Development Ltd.

  3. Automated Camera Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
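
    The 3D-to-2D correspondences that calibration consumes come from a camera model. A minimal pinhole-projection sketch in Python, illustrative only (ACAL's actual camera models handle lens distortion and more; all names and numbers here are ours):

```python
def project(point3d, fx, fy, cx, cy):
    """Ideal pinhole model: camera-frame 3D point -> 2D pixel (no distortion).

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Synthetic calibration data: known 3D fiducial marks and their 2D images.
fiducials = [(0.1, 0.0, 1.0), (0.0, 0.2, 2.0), (-0.1, -0.1, 1.5)]
observed = [project(p, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
            for p in fiducials]
```

Calibration runs this mapping in reverse: given measured 2D locations of known 3D fiducials, it fits the model parameters (fx, fy, cx, cy, and in practice distortion terms) that best explain the observations.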

  4. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Imamura, M. S.; Moser, R. L.; Veatch, M.

    1983-01-01

    Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined and automation functions between power subsystem, central spacecraft computer, and ground flight-support personnel are partitioned. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except for anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, a need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential in meeting the 1987 technology readiness date for the space station.

  5. Automated fiber pigtailing technology

    NASA Astrophysics Data System (ADS)

    Strand, O. T.; Lowry, M. E.; Lu, S. Y.; Nelson, D. C.; Nikkel, D. J.; Pocha, M. D.; Young, K. D.

    1994-02-01

    The high cost of optoelectronic (OE) devices is due mainly to the labor-intensive packaging process. Manually pigtailing such devices as single-mode laser diodes and modulators is very time consuming with poor quality control. The Photonics Program and the Engineering Research Division at LLNL are addressing several issues associated with automatically packaging OE devices. A fully automated system must include high-precision fiber alignment, fiber attachment techniques, in-situ quality control, and parts handling and feeding. This paper will present on-going work at LLNL in the areas of automated fiber alignment and fiber attachment. For the fiber alignment, we are building an automated fiber pigtailing machine (AFPM) which combines computer vision and object recognition algorithms with active feedback to perform sub-micron alignments of single-mode fibers to modulators and laser diodes. We expect to perform sub-micron alignments in less than five minutes with this technology. For fiber attachment, we are building various geometries of silicon microbenches which include on-board heaters to solder metal-coated fibers and other components in place; these designs are completely compatible with an automated process of OE packaging. We have manually attached a laser diode, a thermistor, and a thermo-electric heater to one of our microbenches in less than 15 minutes using the on-board heaters for solder reflow; an automated process could perform this same exercise in only a few minutes. Automated packaging techniques such as these will help lower the costs of OE devices.

  6. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  7. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  8. SPINning parallel systems software.

    SciTech Connect

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-03-15

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes, and the connections among them, are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  9. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  10. Automated gas chromatography

    DOEpatents

    Mowry, Curtis D.; Blair, Dianna S.; Rodacy, Philip J.; Reber, Stephen D.

    1999-01-01

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute.

  11. Ground based automated telescope

    SciTech Connect

    Colgate, S.A.; Thompson, W.

    1980-01-01

    Recommendation that a ground-based automated telescope of the 2-meter class be built for remote multiuser use as a national facility. Experience dictates that a primary consideration is a time-shared multitasking operating system with virtual memory, overlaid with a real-time priority interrupt. The primary user facility is a remote terminal networked to the single computer. Many users must have simultaneous time-shared access to the computer for program development. The telescope should be rapid-slewing, and hence of lightweight construction. Automation allows for closed-loop pointing-error correction independent of extreme accuracy of the mount.

  12. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Engineering software development was automated using an expert system (rule-based) approach. The use of this technology offers benefits not available from current software development and maintenance methodologies. A workstation was built with a library or program data base with methods for browsing the designs stored; a system for graphical specification of designs including a capability for hierarchical refinement and definition in a graphical design system; and an automated code generation capability in FORTRAN. The workstation was then used in a demonstration with examples from an attitude control subsystem design for the space station. Documentation and recommendations are presented.

  13. Automating the CMS DAQ

    SciTech Connect

    Bauer, G.; et al.

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  14. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is in the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on an object-oriented language (Flavors).

  15. Automation of analytical isotachophoresis

    NASA Technical Reports Server (NTRS)

    Thormann, Wolfgang

    1985-01-01

    The basic features of automation of analytical isotachophoresis (ITP) are reviewed. Experimental setups consisting of narrow bore tubes which are self-stabilized against thermal convection are considered. Sample detection in free solution is discussed, listing the detector systems presently used or expected to be of potential use in the near future. The combination of a universal detector measuring the evolution of ITP zone structures with detector systems specific to desired components is proposed as a concept of an automated chemical analyzer based on ITP. Possible miniaturization of such an instrument by means of microlithographic techniques is discussed.

  16. Parallel screening and optimization of protein constructs for structural studies

    PubMed Central

    Rasia, Rodolfo M; Noirclerc-Savoye, Marjolaine; Bologna, Nicolás G; Gallet, Benoit; Plevin, Michael J; Blanchard, Laurence; Palatnik, Javier F; Brutscher, Bernhard; Vernet, Thierry; Boisbouvier, Jérôme

    2009-01-01

    A major challenge in structural biology remains the identification of protein constructs amenable to structural characterization. Here, we present a simple method for parallel expression, labeling, and purification of protein constructs (up to 80 kDa) combined with rapid evaluation by NMR spectroscopy. Our approach, which is equally applicable for manual or automated implementation, offers an efficient way to identify and optimize protein constructs for NMR or X-ray crystallographic investigations. PMID:19177520

  17. Parallel Total Energy

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  18. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  19. Parallel Multigrid Equation Solver

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  20. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  1. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
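
    The reason the mathematical properties of a reduction matter can be shown with a pairwise (tree) reduction: when the operator is associative, the combinations at each level are mutually independent and could run concurrently. A Python sketch of the idea (illustrative only; unrelated to the Sisal implementation):

```python
def tree_reduce(op, values):
    """Pairwise (tree) reduction over a sequence.

    For an associative op, every combination within one level is
    independent of the others, exposing log-depth parallelism where a
    sequential fold would impose a linear dependency chain.
    """
    vals = list(values)
    if not vals:
        raise ValueError("empty reduction")
    while len(vals) > 1:
        nxt = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # carry the odd element up a level
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

total = tree_reduce(lambda a, b: a + b, range(1, 101))
```

Because only associativity is assumed (adjacent pairs are combined in order), the same scheme works for non-commutative operators such as string concatenation, which is why classifying reduction operators by their algebraic properties drives which optimizations apply.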

  2. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
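
    The brute-force baseline that fast Gauss transforms accelerate is easy to state. A direct O(N*M) evaluation in one dimension (Python, illustrative only; the paper's plane-wave translations and parallel octrees are not shown, and the function name is ours):

```python
import math

def gauss_transform_direct(sources, weights, targets, h):
    """Directly evaluate G(y_j) = sum_i w_i * exp(-(y_j - x_i)^2 / h^2).

    Every (target, source) pair is visited, giving the O(N*M) cost that
    fast Gauss transform algorithms reduce.
    """
    out = []
    for y in targets:
        s = 0.0
        for x, w in zip(sources, weights):
            s += w * math.exp(-((y - x) ** 2) / (h * h))
        out.append(s)
    return out

# Two unit-weight-scale sources evaluated at one target.
vals = gauss_transform_direct([0.0, 1.0], [1.0, 2.0], [0.0], h=1.0)
```

Fast methods replace the inner loop with truncated expansions (here, plane waves) that are translated between octree boxes, so each target interacts with a constant number of expansion terms instead of all N sources.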

  3. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  4. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  5. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
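    The direct-simulation idea above rests on Monte Carlo convergence: tallies of simulated photons approach the true illumination as the photon count grows. The sketch below is our own toy (a uniform 1-D "scene" with no geometry or wavelengths), not Photon's algorithm.

```python
import random

# Toy forward light-transport sketch (our illustration; vastly simpler than
# Photon itself): emit photons from a light source and tally where they land
# on a segmented receiver. The per-bin tallies converge toward the
# illumination solution as more photons are traced.
def trace_photons(n_photons, n_bins, seed=0):
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_photons):
        x = rng.random()              # landing position in [0, 1) for this toy
        bins[int(x * n_bins)] += 1
    return [count / n_photons for count in bins]

estimate = trace_photons(100_000, 4)
print(estimate)  # each bin tends toward 0.25 as n_photons grows
```

    Parallelizing this is embarrassingly simple in principle (photons are independent), which is why the paper reports excellent scaling on both shared- and distributed-memory machines.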

  6. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability for parallel implementation and for scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  7. Parallel Dislocation Simulator

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  8. Human Factors In Aircraft Automation

    NASA Technical Reports Server (NTRS)

    Billings, Charles

    1995-01-01

    Report presents survey of state of art in human factors in automation of aircraft operation. Presents examination of aircraft automation and effects on flight crews in relation to human error and aircraft accidents.

  9. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  10. Library Automation: An Overview.

    ERIC Educational Resources Information Center

    Saffady, William

    1989-01-01

    Surveys the current state of computer applications in six areas of library work: circulation control; descriptive cataloging; catalog maintenance and production; reference services; acquisitions; and serials control. Motives for automation are discussed, and examples of representative vendors, products, and services are given. (15 references) (LRW)

  11. Automation in haemostasis.

    PubMed

    Huber, A R; Méndez, A; Brunner-Agten, S

    2013-01-01

    Automatia, an ancient Greek goddess of luck who makes things happen by themselves, of her own will and without human engagement, is present in our daily life in the medical laboratory. Automation was introduced and perfected by clinical chemistry and has since expanded into other fields such as haematology, immunology, molecular biology, and coagulation testing. The initial small and relatively simple standalone instruments have been replaced by more complex systems that allow for multitasking. Integration of automated coagulation testing into total laboratory automation has become possible in recent years. Automation has many strengths and opportunities, provided its weaknesses and threats are respected. On the positive side, standardization, reduction of errors, reduction of cost, and increased throughput are clearly beneficial. Dependence on manufacturers, high initial cost, and somewhat expensive maintenance are less favourable factors. The modern laboratory, and especially today's laboratory technicians and academic personnel, do not add value for the doctor and his patients by spending much of their time behind the machines. In the future, the laboratory needs to contribute at the bedside by suggesting laboratory testing and providing support and interpretation of the obtained results. The human factor will continue to play an important role in haemostasis testing, albeit under different circumstances.

  12. Building Automation Systems.

    ERIC Educational Resources Information Center

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  13. Automated CCTV Tester

    2000-09-13

    The purpose of an automated CCTV tester is to automatically and continuously monitor multiple perimeter security cameras for changes in a camera's measured resolution and alignment (camera looking at the proper area). It shall track and record the image quality and position of each camera and produce an alarm when a camera is out of specification.

  14. Blastocyst microinjection automation.

    PubMed

    Mattos, Leonardo S; Grant, Edward; Thresher, Randy; Kluckman, Kimberly

    2009-09-01

    Blastocyst microinjections are routinely involved in the process of creating genetically modified mice for biomedical research, but their efficiency is highly dependent on the skills of the operators. As a consequence, much time and resources are required for training microinjection personnel. This situation has been aggravated by the rapid growth of genetic research, which has increased the demand for mutant animals. Therefore, increased productivity and efficiency in this area are highly desired. Here, we pursue these goals through the automation of a previously developed teleoperated blastocyst microinjection system. This included the design of a new system setup to facilitate automation, the definition of rules for automatic microinjections, the implementation of video processing algorithms to extract feedback information from microscope images, and the creation of control algorithms for process automation. Experimentation conducted with this new system and operator assistance during the cell-delivery phase demonstrated a 75% microinjection success rate. In addition, implantation of the successfully injected blastocysts resulted in a 53% birth rate and a 20% yield of chimeras. These results proved that the developed system was capable of automatic blastocyst penetration and retraction, demonstrating the success of major steps toward full process automation.

  15. Library Automation in Australia.

    ERIC Educational Resources Information Center

    Blank, Karen L.

    1984-01-01

    Discussion of Australia's move toward library automation highlights development of a national bibliographic network, local and regional cooperation, integrated library systems, telecommunications, and online systems, as well as microcomputer usage, ergonomics, copyright issues, and national information policy. Information technology plans of the…

  16. Automated Management Of Documents

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1995-01-01

    Report presents main technical issues involved in computer-integrated documentation. Problems associated with automation of management and maintenance of documents analyzed from perspectives of artificial intelligence and human factors. Technologies that may prove useful in computer-integrated documentation reviewed: these include conventional approaches to indexing and retrieval of information, use of hypertext, and knowledge-based artificial-intelligence systems.

  17. Mining Your Automated System.

    ERIC Educational Resources Information Center

    Larsen, Patricia M., Ed.; And Others

    1996-01-01

    Four articles address issues of collecting, compiling, reporting, and interpreting statistics generated by automated library systems for administrative decision making. Topics include using a management information system to forecast growth and assess areas for downsizing; statistics for collection development and analysis; and online system…

  18. Automated conflict resolution issues

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey S.

    1991-01-01

    A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.

  19. Automating Food Service.

    ERIC Educational Resources Information Center

    Kavulla, Timothy A.

    1986-01-01

    The Wichita, Kansas, Public Schools' Food Service Department Project Reduction in Paperwork (RIP) is designed to automate certain paperwork functions, thus reducing cost and flow of paper. This article addresses how RIP manages free/reduced meal applications and meets the objectives of reducing paper and increasing accuracy, timeliness, and…

  20. Automated Estimating System (AES)

    SciTech Connect

    Holder, D.A.

    1989-09-01

    This document describes Version 3.1 of the Automated Estimating System, a personal computer-based software package designed to aid in the creation, updating, and reporting of project cost estimates for the Estimating and Scheduling Department of the Martin Marietta Energy Systems Engineering Division. Version 3.1 of the Automated Estimating System is capable of running in a multiuser environment across a token ring network. The token ring network makes possible services and applications that will more fully integrate all aspects of information processing, provides a central area for large data bases to reside, and allows access to the data base by multiple users. Version 3.1 of the Automated Estimating System also has been enhanced to include an Assembly pricing data base that may be used to retrieve cost data into an estimate. A WBS Title File program has also been included in Version 3.1. The WBS Title File program allows for the creation of a WBS title file that has been integrated with the Automated Estimating System to provide WBS titles in update mode and in reports. This provides for consistency in WBS titles and provides the capability to display WBS titles on reports generated at a higher WBS level.

  1. Automated Administrative Data Bases

    NASA Technical Reports Server (NTRS)

    Marrie, M. D.; Jarrett, J. R.; Reising, S. A.; Hodge, J. E.

    1984-01-01

    Improved productivity and more effective response to information requirements for internal management, NASA Centers, and Headquarters resulted from using automated techniques. Modules developed to provide information on manpower, RTOPS, full time equivalency, and physical space reduced duplication, increased communication, and saved time. There is potential for greater savings by sharing and integrating with those who have the same requirements.

  2. Automating Small Libraries.

    ERIC Educational Resources Information Center

    Swan, James

    1996-01-01

    Presents a four-phase plan for small libraries strategizing for automation: inventory and weeding, data conversion, implementation, and enhancements. Other topics include selecting a system, MARC records, compatibility, ease of use, industry standards, searching capabilities, support services, system security, screen displays, circulation modules,…

  3. CLAN Automation Plan.

    ERIC Educational Resources Information Center

    Nevada State Library and Archives, Carson City.

    The Central Libraries Automated Network (CLAN) of Nevada is a cooperative system which shares circulation, cataloging, and acquisitions systems and numerous online databases. Its mission is to provide public access to information and efficient library administration through shared computer systems, databases, and telecommunications. This document…

  4. Automated EEG acquisition

    NASA Technical Reports Server (NTRS)

    Frost, J. D., Jr.; Hillman, C. E., Jr.

    1977-01-01

    Automated self-contained portable device can be used by technicians with minimal training. Data acquired from patient at remote site are transmitted to centralized interpretation center using conventional telephone equipment. There, diagnostic information is analyzed, and results are relayed back to remote site.

  5. Automated Essay Scoring

    ERIC Educational Resources Information Center

    Dikli, Semire

    2006-01-01

    The impacts of computers on writing have been widely studied for three decades. Even basic computers functions, i.e. word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…

  6. Parallel processing of remotely sensed data: Application to the ATSR-2 instrument

    NASA Astrophysics Data System (ADS)

    Simpson, J.; McIntire, T.; Berg, J.; Tsou, Y.

    2007-01-01

    Massively parallel computational paradigms can mitigate many issues associated with the analysis of large and complex remotely sensed data sets. Recently, the Beowulf cluster has emerged as the most attractive, massively parallel architecture due to its low cost and high performance. Whereas most Beowulf designs have emphasized numerical modeling applications, the Parallel Image Processing Environment (PIPE) specifically addresses the unique requirements of remote sensing applications. Automated parallelization of user-defined analyses is fully supported. A neural network application, applied to Along Track Scanning Radiometer-2 (ATSR-2) data, shows the advantages and performance characteristics of PIPE.

  7. Embryoid Body-Explant Outgrowth Cultivation from Induced Pluripotent Stem Cells in an Automated Closed Platform

    PubMed Central

    Tone, Hiroshi; Yoshioka, Saeko; Akiyama, Hirokazu; Nishimura, Akira; Ichimura, Masaki; Nakatani, Masaru; Kiyono, Tohru

    2016-01-01

    Automation of cell culture would facilitate stable cell expansion with consistent quality. In the present study, feasibility of an automated closed-cell culture system “P 4C S” for an embryoid body- (EB-) explant outgrowth culture was investigated as a model case for explant culture. After placing the induced pluripotent stem cell- (iPSC-) derived EBs into the system, the EBs successfully adhered to the culture surface and the cell outgrowth was clearly observed surrounding the adherent EBs. After confirming the outgrowth, we carried out subculture manipulation, in which the detached cells were simply dispersed by shaking the culture flask, leading to uniform cell distribution. This enabled continuous stable cell expansion, resulting in a cell yield of 3.1 × 10⁷. There was no evidence of bacterial contamination throughout the cell culture experiments. We herewith developed the automated cultivation platform for EB-explant outgrowth cells.

  8. Embryoid Body-Explant Outgrowth Cultivation from Induced Pluripotent Stem Cells in an Automated Closed Platform.

    PubMed

    Tone, Hiroshi; Yoshioka, Saeko; Akiyama, Hirokazu; Nishimura, Akira; Ichimura, Masaki; Nakatani, Masaru; Kiyono, Tohru; Toyoda, Masashi; Watanabe, Masatoshi; Umezawa, Akihiro

    2016-01-01

    Automation of cell culture would facilitate stable cell expansion with consistent quality. In the present study, feasibility of an automated closed-cell culture system "P 4C S" for an embryoid body- (EB-) explant outgrowth culture was investigated as a model case for explant culture. After placing the induced pluripotent stem cell- (iPSC-) derived EBs into the system, the EBs successfully adhered to the culture surface and the cell outgrowth was clearly observed surrounding the adherent EBs. After confirming the outgrowth, we carried out subculture manipulation, in which the detached cells were simply dispersed by shaking the culture flask, leading to uniform cell distribution. This enabled continuous stable cell expansion, resulting in a cell yield of 3.1 × 10⁷. There was no evidence of bacterial contamination throughout the cell culture experiments. We herewith developed the automated cultivation platform for EB-explant outgrowth cells. PMID:27648449

  10. Parallel computers and parallel algorithms for CFD: An introduction

    NASA Astrophysics Data System (ADS)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.
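    The performance-evaluation concepts such a tutorial introduces (speedup, efficiency, load imbalance) reduce to simple formulas. The sketch below uses made-up timing numbers purely for illustration; the function names are ours.

```python
# Illustrative only: standard parallel-performance metrics applied to
# hypothetical timing data (none of these numbers come from the text).
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial run."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Speedup per processor; 1.0 would be ideal scaling on p processors."""
    return speedup(t_serial, t_parallel) / p

def load_imbalance(task_times):
    """Ratio of the slowest processor's load to the mean load;
    1.0 means perfectly balanced work."""
    mean = sum(task_times) / len(task_times)
    return max(task_times) / mean

t1, t8 = 100.0, 16.0          # hypothetical run times on 1 and 8 processors
print(speedup(t1, t8))        # 6.25
print(efficiency(t1, t8, 8))  # 0.78125
print(load_imbalance([12.0, 16.0, 14.0, 14.0]))
```

    On distributed-memory machines the gap between measured and ideal efficiency is largely the communication cost and imbalance the tutorial discusses minimizing.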

  11. One-step sample preparation of positive blood cultures for the direct detection of methicillin-sensitive and -resistant Staphylococcus aureus and methicillin-resistant coagulase-negative staphylococci within one hour using the automated GenomEra CDX™ PCR system.

    PubMed

    Hirvonen, J J; von Lode, P; Nevalainen, M; Rantakokko-Jalava, K; Kaukoranta, S-S

    2012-10-01

    A method for the rapid detection of methicillin-sensitive and -resistant Staphylococcus aureus (MSSA and MRSA, respectively) and methicillin-resistant coagulase-negative staphylococci (MRCoNS), using a straightforward blood-culture sample preparation protocol and an automated homogeneous polymerase chain reaction (PCR) assay, the GenomEra™ MRSA/SA (Abacus Diagnostica Oy, Turku, Finland), is presented. In total, 316 BacT/Alert (bioMérieux, Marcy l'Etoile, France) and 433 BACTEC (Becton Dickinson, Sparks, MD, USA) blood culture bottles were analyzed, including 725 positive cultures containing Gram-positive cocci in clusters (n = 419) and other Gram stain forms (n = 361), as well as 24 signal- and growth-negative bottles. Detection sensitivities for MSSA, MRSA, and MRCoNS were 99.4 % (158/159), 100.0 % (9/9), and 99.3 % (132/133), respectively. One false-positive MRSA result was detected from a non-staphylococci-containing bottle, yielding a specificity of 99.8 %. The lowest detectable amount of viable cells in the blood culture sample was 4 × 10⁴ CFU/mL. The results were available within one hour after microbial growth detection, and the two-step, time-resolved fluorometric (TRF) measurement mode employed by the GenomEra CDX™ instrument showed no interference from blood, charcoal, or culture media. The method described requires no sample purification steps and allows reliable, simplified pathogen detection even in clinical microbiology laboratories without specialized molecular microbiology competence.

  12. Parallel Consensual Neural Networks

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

    1993-01-01

    A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.
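    The consensus step described above, weighting and combining the stage-network outputs before deciding, can be sketched as follows. The stage outputs and weights here are invented for illustration; in the actual architecture each stage is a trained neural network applied to a transformed copy of the input.

```python
import numpy as np

# Hedged sketch of the consensus step only (names are ours, not the paper's):
# several "stage" classifiers each output class probabilities for a sample;
# the consensual decision weights and sums them, then takes the argmax.
def consensual_decision(stage_outputs, weights):
    """stage_outputs: list of per-stage probability vectors (one per class).
    weights: reliability weight per stage. Returns the winning class index."""
    combined = sum(w * np.asarray(p) for w, p in zip(weights, stage_outputs))
    return int(np.argmax(combined))

stages = [
    [0.6, 0.3, 0.1],   # stage 1 favours class 0
    [0.2, 0.5, 0.3],   # stage 2 favours class 1
    [0.5, 0.4, 0.1],   # stage 3 favours class 0
]
print(consensual_decision(stages, weights=[1.0, 1.0, 1.0]))  # 0
```

    Because the stages operate on independently transformed inputs, they can run in parallel; only the small weighted sum is a shared step.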

  13. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
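    The DFT-IDFT overlap-and-save method these architectures build on can be sketched in NumPy. This is the generic textbook algorithm, not the report's VLSI design; the block size, filter coefficients, and names below are our own.

```python
import numpy as np

# Textbook overlap-save FIR filtering via DFT-IDFT pairs. Each input block of
# nfft samples overlaps the previous one by len(h)-1 samples; after the
# circular convolution in the frequency domain, the first len(h)-1 outputs
# (corrupted by wraparound) are discarded.
def overlap_save(x, h, nfft=16):
    m = len(h)
    hop = nfft - m + 1                       # valid outputs per block
    H = np.fft.fft(h, nfft)
    x_pad = np.concatenate([np.zeros(m - 1), x,
                            np.zeros(hop)])  # prime the overlap, pad the tail
    y = []
    for start in range(0, len(x), hop):
        block = x_pad[start:start + nfft]
        if len(block) < nfft:
            block = np.concatenate([block, np.zeros(nfft - len(block))])
        yb = np.fft.ifft(np.fft.fft(block) * H).real
        y.append(yb[m - 1:])                 # drop the wrapped-around samples
    return np.concatenate(y)[:len(x)]

x = np.random.default_rng(0).standard_normal(100)
h = np.array([0.5, 0.3, 0.2])
assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])
```

    Note how the DFT size (`nfft`) is chosen independently of the filter length, the key property the report exploits: each block, and each frequency-domain subfilter, can be processed in parallel at a reduced rate.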

  14. Parallel grid population

    SciTech Connect

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
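    The two-phase scheme described above can be caricatured sequentially; in the sketch below (our own simplification, not the patent's embodiment), objects are 1-D intervals, the grid is split into n equal portions, and each loop iteration stands in for one processor.

```python
# Sketch of the patent's two-phase scheme, run sequentially for clarity.
def populate_grid(objects, grid_min, grid_max, n):
    width = (grid_max - grid_min) / n
    # Phase 1: each "processor" takes a distinct set of objects and records
    # which grid portions each object at least partially overlaps.
    overlaps = [[] for _ in range(n)]        # portion index -> object ids
    for obj_id, (lo, hi) in enumerate(objects):
        first = max(0, int((lo - grid_min) // width))
        last = min(n - 1, int((hi - grid_min) // width))
        for portion in range(first, last + 1):
            overlaps[portion].append(obj_id)
    # Phase 2: each "processor" owns one distinct portion and populates it
    # with the objects previously determined to touch that portion.
    return overlaps

objects = [(0.5, 2.5), (3.0, 3.5), (1.9, 4.1)]
print(populate_grid(objects, 0.0, 4.0, 4))   # portion -> overlapping objects
```

    The point of the two phases is that both the object set and the grid are partitioned across the same n processors, so neither phase has a serial bottleneck.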

  15. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification, without introducing hybrid semi-structured regions, is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching on a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation-error adaptation is illustrated for 1D and 3D domains.

  16. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  17. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes while minimizing latency. The multiple networks may include three high-speed networks for parallel algorithm message passing: a Torus network, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  18. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  19. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  20. Collisionless parallel shocks

    NASA Technical Reports Server (NTRS)

    Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

    1993-01-01

    Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

  1. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  2. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  3. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  4. Automated gas chromatography

    DOEpatents

    Mowry, C.D.; Blair, D.S.; Rodacy, P.J.; Reber, S.D.

    1999-07-13

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute. 7 figs.

  5. Automated theorem proving.

    PubMed

    Plaisted, David A

    2014-03-01

    Automated theorem proving is the use of computers to prove or disprove mathematical or logical statements. Such statements can express properties of hardware or software systems, or facts about the world that are relevant for applications such as natural language processing and planning. A brief introduction to propositional and first-order logic is given, along with some of the main methods of automated theorem proving in these logics. These methods of theorem proving include resolution, Davis and Putnam-style approaches, and others. Methods for handling the equality axioms are also presented. Methods of theorem proving in propositional logic are presented first, and then methods for first-order logic. WIREs Cogn Sci 2014, 5:115-128. doi: 10.1002/wcs.1269 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304304
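The resolution method mentioned above can be sketched for propositional logic: clauses are sets of literals, and two clauses containing a complementary pair of literals yield a resolvent; deriving the empty clause refutes the set. A minimal saturation-based sketch (function and clause names are ours, not from the article):

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses. A clause is a frozenset of
    literals; a literal is a string, negation marked by leading '~'."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return resolvents

def unsatisfiable(clauses):
    """Saturate the clause set under resolution; True iff the empty
    clause (a contradiction) is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True      # empty clause: refutation found
                new.add(r)
        if new <= clauses:
            return False             # fixpoint reached, no refutation
        clauses |= new
```

For example, proving p from (p or q) and (not q) amounts to refuting the set {p or q, not q, not p}, which this procedure reports as unsatisfiable. Termination follows because only finitely many clauses can be built from a finite set of literals.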

  6. Automated macromolecular crystallization screening

    DOEpatents

    Segelke, Brent W.; Rupp, Bernhard; Krupka, Heike I.

    2005-03-01

    An automated macromolecular crystallization screening system wherein a multiplicity of reagent mixes are produced. A multiplicity of analysis plates is produced utilizing the reagent mixes combined with a sample. The analysis plates are incubated to promote growth of crystals. Images of the crystals are made. The images are analyzed with regard to suitability of the crystals for analysis by x-ray crystallography. A design of reagent mixes is produced based upon the expected suitability of the crystals for analysis by x-ray crystallography. A second multiplicity of mixes of the reagent components is produced utilizing the design and a second multiplicity of reagent mixes is used for a second round of automated macromolecular crystallization screening. In one embodiment the multiplicity of reagent mixes are produced by a random selection of reagent components.

  7. Automated breeder fuel fabrication

    SciTech Connect

    Goldmann, L.H.; Frederickson, J.R.

    1983-09-01

    The objective of the Secure Automated Fabrication (SAF) Project is to develop remotely operated equipment for the processing and manufacturing of breeder reactor fuel pins. The SAF line will be installed in the Fuels and Materials Examination Facility (FMEF). The FMEF is presently under construction at the Department of Energy's (DOE) Hanford site near Richland, Washington, and is operated by the Westinghouse Hanford Company (WHC). The fabrication and support systems of the SAF line are designed for computer-controlled operation from a centralized control room. Remote and automated fuel fabrication operations will result in: reduced radiation exposure to workers; enhanced safeguards; improved product quality; near real-time accountability; and increased productivity. The present schedule calls for installation of SAF line equipment in the FMEF beginning in 1984, with qualifying runs starting in 1986 and production commencing in 1987. 5 figures.

  8. The automation of science.

    PubMed

    King, Ross D; Rowland, Jem; Oliver, Stephen G; Young, Michael; Aubrey, Wayne; Byrne, Emma; Liakata, Maria; Markham, Magdalena; Pir, Pinar; Soldatova, Larisa N; Sparkes, Andrew; Whelan, Kenneth E; Clare, Amanda

    2009-04-01

    The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist "Adam," which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge. PMID:19342587

  9. Compact reactor design automation

    NASA Technical Reports Server (NTRS)

    Nassersharif, Bahram; Gaeta, Michael J.

    1991-01-01

    A conceptual compact reactor design automation experiment was performed using the real-time expert system G2. The purpose of this experiment was to investigate the utility of an expert system in design; in particular, reactor design. The experiment consisted of the automation and integration of two design phases: reactor neutronic design and fuel pin design. The utility of this approach is shown using simple examples of formulating rules to ensure design parameter consistency between the two design phases. The ability of G2 to communicate with external programs even across networks provides the system with the capability of supplementing the knowledge processing features with conventional canned programs with possible applications for realistic iterative design tools.

  10. Automated campaign system

    NASA Astrophysics Data System (ADS)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, and driving through production and fulfillment, to evaluating results is currently performed by experienced, highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user can easily run a successful campaign from the desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted will be how the complexity of running a targeted campaign is hidden from the user through technologies, all while providing the benefits of a professionally managed campaign.

  11. Automated assembly in space

    NASA Technical Reports Server (NTRS)

    Srivastava, Sandanand; Dwivedi, Suren N.; Soon, Toh Teck; Bandi, Reddy; Banerjee, Soumen; Hughes, Cecilia

    1989-01-01

    The installation of robots and their use for assembly in space will create an exciting and promising future for the U.S. Space Program. Assembly in space is complicated and error prone, and it is not feasible unless the various parts and modules are suitably designed for automation. Certain guidelines are developed for part design and for easy precision assembly. Major design problems associated with automated assembly are considered, and solutions to these problems are evaluated in the guidelines format. Methods for gripping and for part feeding are developed with regard to the absence of gravity in space. Guidelines for part orientation, adjustments, compliances, and various assembly constructions are discussed. Design modifications of various fasteners and fastening methods are also investigated.

  12. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  13. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  14. Cavendish Balance Automation

    NASA Technical Reports Server (NTRS)

    Thompson, Bryan

    2000-01-01

    This is the final report for a project carried out to modify a manual commercial Cavendish Balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf, manually operated Cavendish Balance to allow automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of the effects of superconducting materials on the local gravitational field strength, to determine if the strength of gravitational fields can be reduced. A Cavendish Balance was chosen because it is a fairly simple piece of equipment for measuring gravity, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) All the components necessary to hold and automate the Cavendish Balance in a cryostat were designed; engineering drawings were made of custom parts to be fabricated, and other off-the-shelf parts were procured. (2) Software was written in LabView to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing. (3) Software was written to take the data collected from the Cavendish Balance and reduce it to give a value for the gravitational constant. (4) The components of the system were assembled and fitted to a cryostat, along with the LabView hardware, including the control computer, stepper motor driver, data collection boards, and necessary cabling. (5) The system was operated for a number of periods, and the data were collected and reduced to give an average value for the gravitational constant.

  15. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043

  16. Automation in biological crystallization.

    PubMed

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  17. Automation in biological crystallization

    PubMed Central

    Shaw Stewart, Patrick; Mueller-Dieckmann, Jochen

    2014-01-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given. PMID:24915074

  18. Automated measurement and quantification of heterotrophic bacteria in water samples based on the MPN method.

    PubMed

    Fuchsluger, C; Preims, M; Fritz, I

    2011-01-01

    Quantification of heterotrophic bacteria is a widely used measure for water analysis. Especially in terms of drinking water analysis, testing for microorganisms is strictly regulated by the European Drinking Water Directive, including quality criteria and detection limits. The quantification procedure presented in this study is based on the most probable number (MPN) method, which was adapted to comply with the need for a quick and easy screening tool for different kinds of water samples as well as varying microbial loads. Replacing tubes with 24-well titer plates for cultivation of bacteria drastically reduces the amount of culture media and also simplifies incubation. Automated photometric measurement of turbidity instead of visual evaluation of bacterial growth avoids misinterpretation by operators. Definition of a threshold ensures definite and user-independent determination of microbial growth. Calculation of the MPN itself is done using a program provided by the US Food and Drug Administration (FDA). For evaluation of the method, real water samples of different origins as well as pure cultures of bacteria were analyzed in parallel with the conventional plating methods. Thus, the procedure described requires less preparation time, reduces costs and ensures both stable and reliable results for water samples. PMID:20835882
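The MPN calculation itself, which the study delegates to the FDA program, follows from a Poisson model of organism distribution across wells. For the simplest case of a single dilution with n wells of volume v, of which p show growth, the maximum-likelihood estimate is MPN = -ln((n - p)/n) / v. A minimal sketch of that single-dilution case (our own illustration, not the FDA implementation, which also handles multiple dilution series):

```python
import math

def mpn_single_dilution(positive, total, volume_ml):
    """Most probable number of organisms per mL from one dilution.
    Under a Poisson model the expected fraction of sterile wells is
    exp(-lambda * v), so lambda = -ln(negative / total) / v."""
    if positive >= total:
        raise ValueError("all wells positive: MPN is unbounded")
    negative = total - positive
    return -math.log(negative / total) / volume_ml

# 12 of 24 wells positive at 1 mL per well gives
# MPN = ln 2, roughly 0.69 organisms per mL.
```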

  19. Automating FEA programming

    NASA Technical Reports Server (NTRS)

    Sharma, Naveen

    1992-01-01

    In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize the computationally intensive and domain-formulation-dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation, such as shape functions and element stiffness matrices, are automatically derived using symbolic mathematical computations. The problem-specific information and derived formulae are then used to generate (parallel) numerical code for the FEA solution steps. A constructive approach to specifying a numerical program design is taken. The code generator compiles application-oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods, and the target computer.

  20. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
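The tables described above follow from the parallel-resistance formula 1/R_total = 1/R1 + 1/R2 + ... A short sketch that finds the resistor pairs whose parallel combination is a whole number (our own illustration; exact rational arithmetic avoids floating-point rounding in the whole-number test):

```python
from fractions import Fraction

def parallel(*resistors):
    """Total resistance of resistors in parallel: the reciprocal of
    the sum of reciprocals, computed exactly as a Fraction."""
    return 1 / sum(Fraction(1, r) for r in resistors)

def whole_number_pairs(limit):
    """All pairs (r1, r2) with r1 <= r2 <= limit ohms whose parallel
    combination is a whole number of ohms."""
    pairs = []
    for r1 in range(1, limit + 1):
        for r2 in range(r1, limit + 1):
            total = parallel(r1, r2)
            if total.denominator == 1:
                pairs.append((r1, r2, int(total)))
    return pairs

# e.g. 6-ohm and 3-ohm resistors in parallel give exactly 2 ohms
```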

  1. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  2. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  3. Parallel processing and medium-scale multiprocessors

    SciTech Connect

    Wouk, A.

    1989-01-01

    For some time, the community interested in large-scale scientific computing has been attempting to come to terms with parallel computation using a number of processors sufficient to make their concurrent utilization interesting, challenging, and, in the long run, beneficial. Unexpected consequences of parallelization have been discovered. It is possible to obtain reduced performance, both relative and absolute, from an increased number of processors, as a result of inappropriate use of resources in a multiprocessor environment. This exemplifies one of the paradoxes which result from our cultural bias towards sequential thought processes. As a consequence there is a bias for sequential styles of program development in a multiprocessor environment. The authors have learned that the problem of automatic optimization in compilation of parallel programs is computationally hard. Early hopes that automatic, optimal parallelization of sequentially conceived programs would be as achievable as earlier automatic vectorization had been, have been dashed. The authors lack the insights and folklore which are needed to develop useful methodologies and heuristics in the area of parallel computation. The authors are embarked on a voyage of exploration of this new territory, and the work described in this volume can provide helpful guidance. The authors have to explore fully the differences between distributed memory systems, shared memory systems, and combinations, as well as the relative applicability of SIMD and MIMD architectures. Based on the information obtained in such exploration, useful steps towards efficient utilization of many processors should become possible. This paper covers several areas: systems programming, parallel/language/programming systems, and applications programming.
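The diminishing (or even negative) returns from added processors that the author describes are commonly summarized by Amdahl's law, which bounds speedup by the serial fraction of a program. A minimal illustration (our own, not from the paper):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's law: speedup = 1 / ((1 - f) + f / p), where f is the
    fraction of the work that can run in parallel and p is the
    processor count. The serial fraction (1 - f) caps the speedup."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / processors)

# Even with 95% of the work parallelized, 16 processors yield only
# about a 9x speedup, and no processor count can exceed 20x.
```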

  4. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  5. Development and evaluation of systems for controlling parallel high di/dt thyratrons

    SciTech Connect

    Litton, A.; McDuff, G.

    1982-01-01

    Increasing numbers of high-power, high-repetition-rate applications dictate the use of thyratrons in multiple or hard-parallel configurations to achieve the required rate of current rise, di/dt. This in turn demands the development of systems to control parallel thyratron commutation with nanosecond accuracy. Such systems must be capable of real-time, fully automated control in multi-kilohertz applications while still remaining cost effective. This paper describes the evolution of such a control methodology and system.

  6. Parallelized nested sampling

    NASA Astrophysics Data System (ADS)

    Henderson, R. Wesley; Goggans, Paul M.

    2014-12-01

    One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: E[-log t] = (Nr - r + 1)^(-1) + (Nr - r + 2)^(-1) + ... + Nr^(-1), for shrinkage t with Nr live samples and r samples discarded at each iteration. The equation for the variance, Var(-log t) = (Nr - r + 1)^(-2) + (Nr - r + 2)^(-2) + ... + Nr^(-2), is used to find the appropriate number of live samples Nr to use with r > 1 to match the variance achieved with N1 live samples and r = 1. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
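The variance-matching step described above can be done numerically: sum the series for Var(-log t) and increase Nr until it drops to the r = 1 value of 1/N1^2. A minimal sketch (function names are ours):

```python
def log_shrinkage_variance(live, discarded):
    """Var(-log t) = sum of 1/k^2 for k = Nr - r + 1 .. Nr,
    with Nr live samples and r samples discarded per iteration."""
    n_r, r = live, discarded
    return sum(1.0 / k**2 for k in range(n_r - r + 1, n_r + 1))

def matched_live_samples(n1, r):
    """Smallest Nr whose per-iteration variance with r discards does
    not exceed the variance with n1 live samples and r = 1 (1/n1^2).
    The variance is monotone decreasing in Nr, so a linear scan works."""
    target = 1.0 / n1**2
    n_r = n1
    while log_shrinkage_variance(n_r, r) > target:
        n_r += 1
    return n_r

# With 100 live samples at r = 1, discarding r = 4 per iteration
# needs roughly twice as many live samples for the same precision.
```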

  7. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
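The compositing stage mentioned above combines depth-buffered partial images so that, at each pixel, the fragment nearest the viewer survives. A minimal serial sketch of the per-pixel operation (our own illustration; the paper's contribution is performing this reduction optimally in parallel across processors):

```python
from functools import reduce

def composite(img_a, img_b):
    """Depth-composite two partial images of equal size. Each image
    is a list of (depth, color) pixels; the nearer fragment (smaller
    depth) wins at every pixel."""
    assert len(img_a) == len(img_b)
    return [a if a[0] <= b[0] else b for a, b in zip(img_a, img_b)]

def composite_all(partial_images):
    """Fold all partial images into the final image. Because the
    operation is associative, a parallel renderer can perform this
    reduction pairwise in log2(n) stages across processors."""
    return reduce(composite, partial_images)
```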

  8. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  9. Parallel Kinematic Machines (PKM)

    SciTech Connect

    Henry, R.S.

    2000-03-17

    The purpose of this 3-year cooperative research project was to develop a parallel kinematic machining (PKM) capability for complex parts that normally require expensive multiple setups on conventional orthogonal machine tools. This non-conventional, non-orthogonal machining approach is based on a 6-axis positioning system commonly referred to as a hexapod. Sandia National Laboratories/New Mexico (SNL/NM) was the lead site responsible for a multitude of projects that defined the machining parameters and detailed the metrology of the hexapod. The role of the Kansas City Plant (KCP) in this project was limited to evaluating the application of this unique technology to production applications.

  10. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  11. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  12. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  13. Filter Circuit Design by Parallel Genetic Programming

    NASA Astrophysics Data System (ADS)

    Yano, Yuichi; Kato, Toshiji; Inoue, Kaoru; Miki, Mitsunori

    Genetic Programming (GP) is an extension of the Genetic Algorithm (GA) to handle more structural problems. In this paper, an approach to filter circuit design by GP is proposed. By designing a gene which includes not only the parameters of the constituent elements but also the structural information of the circuit, it becomes possible to apply the proposed approach to various types of filter circuits. GP depends much on trial and error due to its probabilistic nature. To decrease this uncertainty and ensure the progress of the evolution, parallel GP with multiple populations under the island model is also proposed. An MPI-based cluster system is used to realize this parallel computation, where each island corresponds to one node. A lowpass filter and an asymmetric bandpass filter are designed. One hundred trials for multiple populations, with and without migration, are run in the design of the lowpass filter to confirm the validity of the proposed method. In the asymmetric bandpass filter design, the results are compared with those of a circuit designed by hand to confirm the effectiveness of the proposed method. The proposed approach is applicable to various types of filter circuits. It can contribute to an automated design procedure for tasks that would otherwise require an experienced designer.

  14. Theory and practice of parallel direct optimization.

    PubMed

    Janies, Daniel A; Wheeler, Ward C

    2002-01-01

    Our ability to collect and distribute genomic and other biological data is growing at a staggering rate (Pagel, 1999). However, the synthesis of these data into knowledge of evolution is incomplete. Phylogenetic systematics provides a unifying intellectual approach to understanding evolution but presents formidable computational challenges. A fundamental goal of systematics, the generation of evolutionary trees, is typically approached as two distinct NP-complete problems: multiple sequence alignment and phylogenetic tree search. The number of cells in a multiple alignment matrix is exponentially related to sequence length. In addition, the number of evolutionary trees expands combinatorially with respect to the number of organisms or sequences to be examined. Biologically interesting datasets currently comprise hundreds of taxa and thousands of nucleotides and morphological characters. This standard will continue to grow with the advent of highly automated sequencing and the development of character databases. Three areas of innovation are changing how evolutionary computation can be addressed: (1) novel concepts for determination of sequence homology, (2) heuristics and shortcuts in tree-search algorithms, and (3) parallel computing. In this paper and the online software documentation we describe the basic usage of parallel direct optimization as implemented in the software POY (ftp://ftp.amnh.org/pub/molecular/poy).
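    The combinatorial growth in tree space mentioned above has a well-known closed form: the number of distinct unrooted binary trees on n labeled taxa is the double factorial (2n-5)!! for n >= 3. A minimal sketch of that count (a standard systematics result, not code from POY):

    ```python
    def num_unrooted_trees(n):
        """Number of distinct unrooted binary trees on n labeled taxa:
        (2n-5)!! = 1 * 3 * 5 * ... * (2n-5), for n >= 3."""
        count = 1
        for k in range(3, 2 * n - 4, 2):
            count *= k
        return count
    ```

    Already at 10 taxa there are 2,027,025 trees, which is why exhaustive search gives way to the heuristics and parallelism the abstract describes.
    
    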

  15. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  16. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes; at the same time, there is a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  17. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  18. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2(sup 3) is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  19. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution would allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend of a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
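    The per-user metadata query model can be sketched in miniature. In the example below, a plain in-memory list stands in for the MongoDB cluster and all field names are hypothetical; it only illustrates the idea of a search restricted to records a given user may read:

    ```python
    # Hypothetical records mimicking imported GPFS metadata; in the real
    # tool these would live in a per-user MongoDB collection, indexed on
    # each attribute and guarded by credentials.
    ARCHIVE = [
        {"path": "/archive/u1/run42.h5", "owner": "u1",
         "project": "turbulence", "size": 4096},
        {"path": "/archive/u1/notes.txt", "owner": "u1",
         "project": "turbulence", "size": 128},
        {"path": "/archive/u2/mesh.dat", "owner": "u2",
         "project": "plasma", "size": 8192},
    ]

    def search(user, **criteria):
        """Return metadata records readable by `user` matching all criteria."""
        return [r for r in ARCHIVE
                if r["owner"] == user
                and all(r.get(k) == v for k, v in criteria.items())]
    ```

    A FUSE front end would translate directory listings and path lookups into calls like `search("u1", project="turbulence")`, so users query the archive with the file commands they already know.
    
    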

  20. Toward the automation of road networks extraction processes

    NASA Astrophysics Data System (ADS)

    Leymarie, Frederic; Boichis, Nicolas; Airault, Sylvain; Jamet, Olivier

    1996-12-01

    Syseca and IGN are working on various steps in the ongoing march from digital photogrammetry to the semi-automation and, ultimately, the full automation of data manipulation, i.e., capture and analysis. The immediate goals are to reduce production costs and data availability delays. Within this context, we have tackled the distinctive problem of automated road network extraction. The methodology adopted is to first study semi-automatic solutions, which can increase the overall efficiency of human operators in topographic data capture; in a second step, automatic solutions are designed based on the experience gained. We report on different (semi-)automatic solutions for the road following algorithm. One key aspect of our method is to have the stages of 'detection' and 'geometric recovery' cooperate while remaining distinct. 'Detection' is based on a local (texture) analysis of the image, while 'geometric recovery' is concerned with the extraction of 'road objects' from both monocular and stereo information. 'Detection' is a low-level visual process, 'reasoning' directly at the level of image intensities, while the mid-level visual process, 'geometric recovery', uses contextual knowledge about roads, both generic (e.g., parallelism of borders) and specific (e.g., previously extracted road segments and disparities). We then pursue our 'march' by reporting on steps we are exploring toward full automation. In particular, we have made attempts at automating the initialization step, so that the search starts in a valid direction.

  1. Automation in organizations: Eternal conflict

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  2. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilation. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  3. [Automated anesthesia record system].

    PubMed

    Zhu, Tao; Liu, Jin

    2005-12-01

    Based on a client/server architecture, automated anesthesia record software running under the Windows operating system and networks has been developed and programmed with Microsoft Visual C++ 6.0, Visual Basic 6.0 and SQL Server. The system manages the patient's information throughout anesthesia. It collects and integrates data from several kinds of medical equipment, such as monitors, infusion pumps and anesthesia machines, automatically and in real time. The system then generates the anesthesia sheets automatically. The record system makes the anesthesia record more accurate and complete, and can raise the anesthesiologist's working efficiency.

  4. Automated fiber pigtailing machine

    DOEpatents

    Strand, O.T.; Lowry, M.E.

    1999-01-05

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems. 26 figs.

  5. Automated fiber pigtailing machine

    DOEpatents

    Strand, Oliver T.; Lowry, Mark E.

    1999-01-01

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems.

  6. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    2000-01-01

    An automated propellant blending apparatus and method that uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation is discussed. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.

  7. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    1999-01-01

    An automated propellant blending apparatus and method uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.

  8. The Automated Medical Office

    PubMed Central

    Petreman, Mel

    1990-01-01

    With shock and surprise many physicians learned in the 1980s that they must change the way they do business. Competition for patients, increasing government regulation, and the rapidly escalating risk of litigation forces physicians to seek modern remedies in office management. The author describes a medical clinic that strives to be paperless using electronic innovation to solve the problems of medical practice management. A computer software program to automate information management in a clinic shows that practical thinking linked to advanced technology can greatly improve office efficiency. PMID:21233899

  9. Automated Hazard Analysis

    2003-06-26

    The Automated Hazard Analysis (AHA) application is a software tool used to conduct job hazard screening and analysis of tasks to be performed in Savannah River Site facilities. The AHA application provides a systematic approach to the assessment of safety and environmental hazards associated with specific tasks, and the identification of controls, regulations, and other requirements needed to perform those tasks safely. AHA is to be integrated into existing Savannah River Site work control and job hazard analysis processes. Utilization of AHA will improve the consistency and completeness of hazard screening and analysis, and increase the effectiveness of the work planning process.

  10. The automated medical office.

    PubMed

    Petreman, M

    1990-08-01

    With shock and surprise many physicians learned in the 1980s that they must change the way they do business. Competition for patients, increasing government regulation, and the rapidly escalating risk of litigation forces physicians to seek modern remedies in office management. The author describes a medical clinic that strives to be paperless using electronic innovation to solve the problems of medical practice management. A computer software program to automate information management in a clinic shows that practical thinking linked to advanced technology can greatly improve office efficiency.

  11. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  12. World-wide distribution automation systems

    SciTech Connect

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; computer modeling of systems; and distribution management systems.

  13. Phaser.MRage: automated molecular replacement

    SciTech Connect

    Bunkóczi, Gábor; McCoy, Airlie J.; Oeffner, Robert D.; Read, Randy J.

    2013-11-01

    The functionality of the molecular-replacement pipeline phaser.MRage is introduced and illustrated with examples. Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement.

  14. Comparison of an automated pattern analysis machine vision time-lapse system with traditional endpoint measurements in the analysis of cell growth and cytotoxicity.

    PubMed

    Toimela, Tarja; Tähti, Hanna; Ylikomi, Timo

    2008-07-01

    Machine vision is an application of computer vision: it both collects visual information and interprets the images. Although the machine obviously does not 'see' in the same sense that humans do, it is possible to acquire visual information and to create programmes to identify relevant image features in an effective and consistent manner. Machine vision is widely applied in industrial automation, but here we describe how we have used it to monitor and interpret data from cell cultures. The machine vision system used (Cell-IQ) consisted of an inbuilt atmosphere-controlled incubator, in which cell culture plates were placed during the test. Artificial intelligence (AI) software, which uses machine vision technology, performed the follow-up analysis of cellular morphological changes. Basic endpoint and staining methods to evaluate the condition of the cells were conducted in parallel with the machine vision analysis. The results showed that the automated system for pattern analysis of morphological changes yielded results comparable to those obtained by conventional methods. The inbuilt software analysis offers a promising way of evaluating cell growth and various cell phases. The continuous follow-up and label-free analysis, as well as the possibility of measuring multiple parameters simultaneously from the same cell population, were major advantages of this system compared to conventional endpoint measurement methodology.

  15. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  16. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
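    Of the algorithms compared, cyclic odd-even reduction is the one singled out for pipeline machines. A serial sketch of the method is below (standard formulation, not code from the paper; it assumes n = 2^k - 1 unknowns, and a parallel version would execute each stride's inner loop concurrently):

    ```python
    def cyclic_reduction(a, b, c, d):
        """Solve a tridiagonal system by cyclic odd-even reduction.

        a: sub-diagonal, b: main diagonal, c: super-diagonal, d: right-hand
        side.  a[0] and c[n-1] are boundary slots and are ignored.
        Requires n = 2**k - 1 unknowns.
        """
        n = len(b)
        a, b, c, d = list(a), list(b), list(c), list(d)
        a[0] = 0.0        # no neighbor to the left of row 0
        c[n - 1] = 0.0    # no neighbor to the right of the last row
        # Forward phase: eliminate odd-indexed unknowns, doubling the stride.
        s = 1
        while s <= (n + 1) // 4:
            for i in range(2 * s - 1, n, 2 * s):
                alpha = a[i] / b[i - s]
                gamma = c[i] / b[i + s]
                a[i] = -alpha * a[i - s]
                b[i] -= alpha * c[i - s] + gamma * a[i + s]
                c[i] = -gamma * c[i + s]
                d[i] -= alpha * d[i - s] + gamma * d[i + s]
            s *= 2
        # Backward phase: solve the middle unknown, then halve the stride.
        # All updates within one stride are independent, hence parallelizable.
        x = [0.0] * n
        while s >= 1:
            for i in range(s - 1, n, 2 * s):
                left = x[i - s] if i - s >= 0 else 0.0
                right = x[i + s] if i + s < n else 0.0
                x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
            s //= 2
        return x
    ```

    Each stride level touches disjoint rows, which is what maps the method well onto array and pipeline machines like those discussed above.
    
    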

  17. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
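    The serial-versus-parallel distinction can be made concrete with Jones calculus: a product of matrices acts in sequence, while the sum architecture modulates spatially separated components and recombines the weighted outputs, which by linearity equals applying the summed matrix. A minimal numerical sketch (2x2 Jones matrices in pure Python; the specific matrices and weights are illustrative, not from the paper):

    ```python
    def matvec(M, v):
        """Apply a 2x2 Jones matrix to a polarization state (Ex, Ey)."""
        return (M[0][0] * v[0] + M[0][1] * v[1],
                M[1][0] * v[0] + M[1][1] * v[1])

    def weighted_sum(ms, ws):
        """Intensity-weighted sum of Jones matrices: sum_i w_i * M_i."""
        return [[sum(w * M[r][c] for M, w in zip(ms, ws)) for c in range(2)]
                for r in range(2)]

    # Two analyzers passing the horizontal / vertical components, with
    # weights set by an intensity modulator (e.g. the micromirror device).
    H = [[1, 0], [0, 0]]
    V = [[0, 0], [0, 1]]
    w1, w2 = 0.7, 0.3
    E_in = (1 + 0j, 1j)

    # Parallel architecture: combine the separately modulated beams ...
    E_parallel = tuple(w1 * h + w2 * v
                       for h, v in zip(matvec(H, E_in), matvec(V, E_in)))
    # ... which, by linearity, equals applying the single summed matrix.
    E_summed = matvec(weighted_sum([H, V], [w1, w2]), E_in)
    ```

    Because only the weights w_i vary in time, the output SOP is controlled entirely by intensity modulation, as the abstract argues.
    
    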

  18. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. It consists of: o libups.a: C/Fortran-callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so: an EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl: executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp: manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  19. Unified Parallel Software

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. It consists of: o libups.a: C/Fortran-callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so: an EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl: executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp: manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  20. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  1. Automated System Marketplace 1995: The Changing Face of Automation.

    ERIC Educational Resources Information Center

    Barry, Jeff; And Others

    1995-01-01

    Discusses trends in the automated system marketplace with specific attention to online vendors and their customers: academic, public, school, and special libraries. Presents vendor profiles; tables and charts on computer systems and sales; and sidebars that include a vendor source list and the differing views on procuring an automated library…

  2. Automation of Data Traffic Control on DSM Architecture

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry

    2001-01-01

    The design of distributed shared memory (DSM) computers liberates users from the duty to distribute data across processors and allows for the incremental development of parallel programs using, for example, OpenMP or Java threads. DSM architecture greatly simplifies the development of parallel programs having good performance on a few processors. However, achieving good program scalability on DSM computers requires that the user understand data flow in the application and use various techniques to avoid data traffic congestion. In this paper we discuss a number of such techniques, including data blocking, data placement, data transposition, and page-size control, and evaluate their efficiency on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks. We also present a tool which automates the detection of constructs causing data congestion in Fortran array-oriented codes and advises the user on code transformations for improving data traffic in the application.
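
    A minimal sketch of the data-transposition technique mentioned above (my own illustration, not the paper's tool): when a loop traverses the non-contiguous dimension of an array, explicitly transposing the data first turns strided memory accesses into contiguous ones.

```python
import numpy as np

# Assumed illustration of "data transposition" (not from the paper): summing
# down the columns of a C-ordered (row-major) array strides through memory;
# transposing the data first makes each former column a contiguous row.
a = np.arange(12.0).reshape(3, 4)    # row-major (C-ordered) layout
col_sums_strided = a.sum(axis=0)     # strided traversal of memory
at = np.ascontiguousarray(a.T)       # explicit data transposition
col_sums_contig = at.sum(axis=1)     # contiguous traversal, same result
assert np.allclose(col_sums_strided, col_sums_contig)
```

    On large arrays the contiguous traversal is typically faster; here the point is only that both layouts compute identical results.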

  3. Automated landmark-guided deformable image registration

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  4. Sunglint Detection for Unmanned and Automated Platforms

    PubMed Central

    Garaba, Shungudzemwoyo Pascal; Schulz, Jan; Wernand, Marcel Robert; Zielinski, Oliver

    2012-01-01

    We present an empirical quality control protocol for above-water radiometric sampling, focusing on identifying sunglint situations. Using hyperspectral radiometers, measurements were taken from an automated, unmanned seaborne platform in northwest European shelf seas. In parallel, a camera system captured sea surface and sky images of the investigated points. The quality control consists of meteorological flags that utilize incoming solar irradiance (ES) spectra to mask dusk, dawn, precipitation, and low-light conditions. Using the 629 of 3,121 spectral measurements that passed the meteorological flagging, a new sunglint flag was developed. To detect sunglint visible in the simultaneously available sea surface images, a sunglint image detection algorithm was developed and implemented. Applying this algorithm, two data sets were derived: one with sunglint (detectable white pixels) and one without (few or no detectable white pixels). To identify the most effective sunglint flagging criteria, we evaluated the spectral characteristics of these two data sets using water-leaving radiance (LW) and remote sensing reflectance (RRS). Spectral conditions satisfying ‘mean LW (700–950 nm) < 2 mW·m−2·nm−1·Sr−1’ or, alternatively, ‘minimum RRS (700–950 nm) < 0.010 Sr−1’ mask most measurements affected by sunglint, providing an efficient empirical flagging of sunglint in automated quality control.
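
    The two flagging criteria quoted above can be written as a small function; this is a hedged sketch of my own (function name and argument layout are assumptions, not the authors' code).

```python
# Sketch of the empirical sunglint flag: a measurement passes if
# 'mean LW (700-950 nm) < 2 mW m^-2 nm^-1 sr^-1' or
# 'minimum RRS (700-950 nm) < 0.010 sr^-1'; otherwise it is flagged.
def sunglint_flag(wavelengths, lw, rrs):
    """Return True if the measurement should be masked as sunglint-affected."""
    band = [i for i, w in enumerate(wavelengths) if 700 <= w <= 950]
    mean_lw = sum(lw[i] for i in band) / len(band)
    min_rrs = min(rrs[i] for i in band)
    passes = mean_lw < 2.0 or min_rrs < 0.010
    return not passes

clean = sunglint_flag([700, 800, 950], [1.0, 1.5, 0.5], [0.02, 0.03, 0.02])
glinty = sunglint_flag([700, 800, 950], [5.0, 6.0, 4.0], [0.02, 0.03, 0.02])
```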

  5. Automated landmark-guided deformable image registration.

    PubMed

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. PMID:25479095

  6. Automated code compilation via the Release Manager

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.

    2006-07-01

    GLAST performs automated builds of its C++ code base. These builds reflect a three-tiered approach to code development, allowing us to test and create releases, as well as get a view of new code submissions that will eventually make it into releases. The program responsible for these automated builds is called the Release Manager. It is based on code originally written by Alex Schlessinger. Its main purpose is to provide rapid feedback for developers when code changes occur. It consists of three loosely connected pieces: The batch submission interface, the Workflow manager, and the Release Manager scripts. The batch submission interface is responsible for keeping track of submitted batch jobs and notifying users/programs using various methods when jobs change status. The Release Manager relies heavily on this interface to allow code builds to happen on demand and in parallel. The workflow manager is a generic program responsible for moving from one state to another based on criteria defined. These states are executed using the batch submission program. Finally the Release Manager consists of scripts that are registered as different states in the Workflow Manager. The Release Manager is currently able to run on Linux and Windows. It uses a MySQL database to record its information. It is currently tightly tied to GLAST's build tool, CMT. Other purposes of the Release Manager are to create source packages for developers and binary packages for end users.
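
    The state-driven design described above can be sketched as a tiny workflow manager (entirely my own construction; the state names and the criterion/action interface are assumptions, not Release Manager internals).

```python
# Minimal workflow manager: each state has transition criteria; when one is
# satisfied, the associated action runs (standing in for a batch-job
# submission) and the workflow advances to the next state.
class Workflow:
    def __init__(self):
        self.transitions = {}  # state -> [(criterion, next_state, action)]
        self.log = []

    def register(self, state, criterion, next_state, action):
        self.transitions.setdefault(state, []).append((criterion, next_state, action))

    def run(self, state, ctx):
        while state in self.transitions:
            for criterion, next_state, action in self.transitions[state]:
                if criterion(ctx):
                    self.log.append(state)
                    action(ctx)          # e.g. submit a batch job and wait
                    state = next_state
                    break
            else:
                break                    # no criterion satisfied: stop
        return state

wf = Workflow()
wf.register("checkout", lambda c: True, "build", lambda c: c.update(source=True))
wf.register("build", lambda c: c["source"], "test", lambda c: c.update(built=True))
wf.register("test", lambda c: c["built"], "done", lambda c: c.update(passed=True))
ctx = {}
final_state = wf.run("checkout", ctx)
```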

  7. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  8. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; Illsley, Jeannette

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  9. Space station advanced automation

    NASA Technical Reports Server (NTRS)

    Woods, Donald

    1990-01-01

    In the development of a safe, productive and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology which will allow efficient operation at a reasonable cost. The Space Station Freedom's (SSF) systems are very complex and interdependent. The use of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. Using AA technology to augment system management functions requires a development model which consists of well-defined phases: evaluation, development, integration, and maintenance. The evaluation phase will consider system management functions against traditional solutions, implementation techniques, and requirements; the end result of this phase should be a well-developed concept along with a feasibility analysis. In the development phase the AA system will be developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way by which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase the KBS software must be integrated with conventional software, and verified and validated. The Verification and Validation (V and V) techniques applicable to these KBS are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase will be aided by having well-designed and documented KBS software.

  10. Automated office blood pressure.

    PubMed

    Myers, Martin G; Godwin, Marshall

    2012-05-01

    Manual blood pressure (BP) is gradually disappearing from clinical practice with the mercury sphygmomanometer now considered to be an environmental hazard. Manual BP is also subject to measurement error on the part of the physician/nurse and patient-related anxiety which can result in poor quality BP measurements and office-induced (white coat) hypertension. Automated office (AO) BP with devices such as the BpTRU (BpTRU Medical Devices, Coquitlam, BC) has already replaced conventional manual BP in many primary care practices in Canada and has also attracted interest in other countries where research studies using AOBP have been undertaken. The basic principles of AOBP include multiple readings taken with a fully automated recorder with the patient resting alone in a quiet room. When these principles are followed, office-induced hypertension is eliminated and AOBP exhibits a much stronger correlation with the awake ambulatory BP as compared with routine manual BP measurements. Unlike routine manual BP, AOBP correlates as well with left ventricular mass as does the awake ambulatory BP. AOBP also simplifies the definition of hypertension in that the cut point for a normal AOBP (< 135/85 mm Hg) is the same as for the awake ambulatory BP and home BP. This article summarizes the currently available evidence supporting the use of AOBP in routine clinical practice and proposes an algorithm in which AOBP replaces manual BP for the diagnosis and management of hypertension. PMID:22265230
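
    The AOBP protocol above reduces to averaging several automated readings and comparing the mean to the 135/85 mm Hg cut point; a toy sketch of my own construction (not a clinical tool):

```python
# Toy AOBP helper: average multiple (systolic, diastolic) readings from the
# automated recorder and compare the mean to the 135/85 mm Hg cut point.
def aobp_mean(readings):
    n = len(readings)
    sys_mean = sum(r[0] for r in readings) / n
    dia_mean = sum(r[1] for r in readings) / n
    return sys_mean, dia_mean

def aobp_elevated(readings, cut=(135, 85)):
    sys_mean, dia_mean = aobp_mean(readings)
    return sys_mean >= cut[0] or dia_mean >= cut[1]

normal = aobp_elevated([(130, 80), (132, 82), (128, 78)])    # mean 130/80
elevated = aobp_elevated([(140, 90), (138, 88), (142, 86)])  # mean 140/88
```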

  11. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.
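
    The manager/worker style used by OVERFLOW/PVM can be sketched generically (my own illustration, with threads standing in for PVM processes and a trivial per-zone computation standing in for the flow solve):

```python
from concurrent.futures import ThreadPoolExecutor

# Manager/worker sketch: the manager hands grid zones to a pool of workers,
# each worker "solves" its zone independently, and the manager gathers results.
def solve_zone(zone):
    return sum(x * x for x in zone)  # stand-in for a per-zone flow solve

def manager(zones, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve_zone, zones))

results = manager([[1, 2], [3, 4], [5]])
```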

  12. Computer automated design and computer automated manufacture.

    PubMed

    Brncick, M

    2000-08-01

    The introduction of computer aided design and computer aided manufacturing into the field of prosthetics and orthotics did not arrive without concern. Many prosthetists feared that the computer would provide other allied health practitioners who had little or no experience in prosthetics the ability to fit and manage amputees. Technicians in the field felt their jobs might be jeopardized by automated fabrication techniques. This has not turned out to be the case. Prosthetists who use CAD-CAM techniques are finding they have more time for patient care and clinical assessment. CAD-CAM is another tool for them to provide better care for the patients/clients they serve. One of the factors that deterred the acceptance of CAD-CAM techniques in its early stages was that of cost. It took a significant investment in software and hardware for the prosthetists to begin to use the new systems. This new technique was not reimbursed by insurance coverage. Practitioners did not have enough information about this new technique to make a sound decision on their investment of time and money. Ironically, it is the need to hold health care costs down that may prove to be the catalyst for the increased use of CAD-CAM in the field. Providing orthoses and prostheses to patients who require them is a very labor intensive process. Practitioners are looking for better, faster, and more economical ways in which to provide their services under the pressure of managed care. CAD-CAM may be the answer. The author foresees shape sensing departments in hospitals where patients would be sent to be digitized, similar to someone going for a radiograph or an ultrasound. Afterwards, an orthosis or prosthesis could be provided from a central fabrication facility at a remote site, most likely on the same day. Not long ago, highly skilled practitioners with extensive technical ability would custom make almost every orthosis. One now practices in an atmosphere where off-the-shelf orthoses are the standard.

  13. A Demonstration of Automated DNA Sequencing.

    ERIC Educational Resources Information Center

    Latourelle, Sandra; Seidel-Rogol, Bonnie

    1998-01-01

    Details a simulation that employs a paper-and-pencil model to demonstrate the principles behind automated DNA sequencing. Discusses the advantages of automated sequencing as well as the chemistry of automated DNA sequencing. (DDR)

  14. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Baxter, Doug

    1988-01-01

    The class of problems that can be effectively compiled by parallelizing compilers is discussed. This is accomplished with the doconsider construct, which allows these compilers to parallelize many problems in which substantial loop-level parallelism is available but cannot be detected by standard compile-time analysis. We describe and experimentally analyze mechanisms used to parallelize the work required for these types of loops. In each of these methods, a new loop structure is produced by modifying the loop to be parallelized. We also present the rules by which these loop transformations may be automated so that they can be included in language compilers. The main application area of the research involves problems in scientific computation and engineering. The workload used in our experiments includes a mixture of real problems as well as synthetically generated inputs. From our extensive tests on the Encore Multimax/320, we have concluded that, for the types of workloads we have investigated, self-execution almost always performs better than pre-scheduling. Further, the performance improvement from global topological sorting of indices, as opposed to the less expensive local sorting, is not very significant in the case of self-execution.
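
    The self-execution scheme the authors favor can be illustrated with a shared-counter loop scheduler (a sketch of my own; the paper's mechanisms operate at the compiler/runtime level, not in Python):

```python
import threading

# Self-scheduling: rather than pre-assigning iterations to workers, each
# worker repeatedly claims the next unprocessed index from a shared counter,
# so load balances dynamically even when iteration costs vary.
def self_schedule(n_iters, n_workers, body):
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_iters

    def worker():
        while True:
            with lock:                    # atomically claim an iteration
                i = counter["next"]
                if i >= n_iters:
                    return
                counter["next"] += 1
            results[i] = body(i)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

squares = self_schedule(10, 3, lambda i: i * i)
```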

  15. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-01

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing these data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, though not always quickly enough to meet the challenges posed by the volume of data acquired. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
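
    The split/search/merge strategy can be sketched in a few lines (my own generic illustration; plain lists stand in for mzXML/pepXML files and the real search engines):

```python
# Data decomposition around an unmodified "search engine": split the spectra,
# search each chunk independently, then recompose the partial results.
def decompose(spectra, n_chunks):
    return [spectra[i::n_chunks] for i in range(n_chunks)]

def search_engine(chunk):
    # Stand-in for X!Tandem or SpectraST: identify each spectrum independently.
    return [("hit", s) for s in chunk]

def recompose(partials):
    return [hit for part in partials for hit in part]

chunks = decompose(list(range(10)), 3)
hits = recompose(search_engine(c) for c in chunks)
```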

  16. Robotics/Automated Systems Technicians.

    ERIC Educational Resources Information Center

    Doty, Charles R.

    Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…

  17. Automated Test-Form Generation

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…

  18. Opening up Library Automation Software

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    Throughout the history of library automation, the author has seen a steady advancement toward more open systems. In the early days of library automation, when proprietary systems dominated, the need for standards was paramount since other means of inter-operability and data exchange weren't possible. Today's focus on Application Programming…

  19. Automated Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry; Riedesel, Joel; Myers, Chris; Miller, William; Jones, Ellen F.; Freeman, Kenneth; Walsh, Richard; Walls, Bryan K.; Weeks, David J.; Bechtel, Robert T.

    1992-01-01

    Autonomous power-distribution system includes power-control equipment and automation equipment. System automatically schedules connection of power to loads and reconfigures itself when it detects fault. Potential terrestrial applications include optimization of consumption of power in homes, power supplies for autonomous land vehicles and vessels, and power supplies for automated industrial processes.

  20. Automating a clinical management system.

    PubMed

    Gordon, B; Braun, D

    1990-06-01

    Automating the clinical documentation of a home health care agency will prove crucial as the industry continues to grow and becomes increasingly complex. Kimberly Quality Care, a large, multi-office home care company, made a major commitment to the automation of its clinical management documents.

  1. Translation: Aids, Robots, and Automation.

    ERIC Educational Resources Information Center

    Andreyewsky, Alexander

    1981-01-01

    Examines electronic aids to translation both as ways to automate it and as an approach to solve problems resulting from shortage of qualified translators. Describes the limitations of robotic MT (Machine Translation) systems, viewing MAT (Machine-Aided Translation) as the only practical solution and the best vehicle for further automation. (MES)

  2. Progress Toward Automated Cost Estimation

    NASA Technical Reports Server (NTRS)

    Brown, Joseph A.

    1992-01-01

    Report discusses efforts to develop standard system of automated cost estimation (ACE) and computer-aided design (CAD). Advantage of system is time saved and accuracy enhanced by automating extraction of quantities from design drawings, consultation of price lists, and application of cost and markup formulas.

  3. Automated Circulation. SPEC Kit 43.

    ERIC Educational Resources Information Center

    Association of Research Libraries, Washington, DC. Office of Management Studies.

    Of the 64 libraries responding to a 1978 Association of Research Libraries (ARL) survey, 37 indicated that they used automated circulation systems; half of these were commercial systems, and most were batch-process or combination batch process and online. Nearly all libraries without automated systems cited lack of funding as the reason for not…

  4. Detection of Salmonella from chicken rinses and chicken hot dogs with the automated BAX PCR system.

    PubMed

    Bailey, J S; Cosby, D E

    2003-11-01

    The BAX system with automated PCR detection was compared with standard cultural procedures for the detection of naturally occurring and spiked Salmonella in 183 chicken carcass rinses and 90 chicken hot dogs. The automated assay procedure consists of overnight growth (16 to 18 h) of the sample in buffered peptone broth at 35 degrees C, transfer of the sample to lysis tubes, incubation and lysis of the cells, transfer of the sample to PCR tubes, and placement of tubes into the cycler-detector, which runs automatically. The automated PCR detection assay takes about 4 h after 16 to 24 h of overnight preenrichment. The culture procedure consists of preenrichment, enrichment, plating, and serological confirmation and takes about 72 h. Three trials involving 10 to 31 samples were carried out for each product. Some samples were spiked with Salmonella Typhimurium, Salmonella Heidelberg, Salmonella Montevideo, and Salmonella Enteritidis at 1 to 250 cells per ml of rinse or 1 to 250 cells per g of meat. For unspiked chicken rinses, Salmonella was detected in 2 of 61 samples with the automated system and in 1 of 61 samples with the culture method. Salmonella was recovered from 111 of 122 spiked samples with the automated PCR system and from 113 of 122 spiked samples with the culture method. For chicken hot dogs, Salmonella was detected in all 60 of the spiked samples with both the automated PCR and the culture procedures. For the 30 unspiked samples, Salmonella was recovered from 19 samples with the automated PCR system and from 10 samples with the culture method. The automated PCR system provided reliable Salmonella screening of chicken product samples within 24 h.

  5. Automated design of aerospace structures

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Mccomb, H. G.

    1974-01-01

    The current state-of-the-art in structural analysis of aerospace vehicles is characterized, automated design technology is discussed, and an indication is given of the future direction of research in analysis and automated design. Representative computer programs for analysis typical of those in routine use in vehicle design activities are described, and results are shown for some selected analysis problems. Recent and planned advances in analysis capability are indicated. Techniques used to automate the more routine aspects of structural design are discussed, and some recently developed automated design computer programs are described. Finally, discussion is presented of early accomplishments in interdisciplinary automated design systems, and some indication of the future thrust of research in this field is given.

  6. Automated Desalting Apparatus

    NASA Technical Reports Server (NTRS)

    Spencer, Maegan K.; Liu, De-Ling; Kanik, Isik; Beegle, Luther

    2010-01-01

    Because salt and metals can mask the signature of a variety of organic molecules (like amino acids) in any given sample, an automated system to purify complex field samples has been created for the analytical techniques of electrospray ionization/mass spectroscopy (ESI/MS), capillary electrophoresis (CE), and biological assays where unique identification requires at least some processing of complex samples. This development allows for automated sample preparation in the laboratory and analysis of complex samples in the field with multiple types of analytical instruments. Rather than using tedious, exacting protocols for desalting samples by hand, this innovation, called the Automated Sample Processing System (ASPS), takes analytes that have been extracted through high-temperature solvent extraction and introduces them into the desalting column. After 20 minutes, the eluent is produced. This clear liquid can then be directly analyzed by the techniques listed above. The current apparatus, including the computer and power supplies, is sturdy, has an approximate mass of 10 kg and a volume of about 20 × 20 × 20 cm, and is undergoing further miniaturization. This system currently targets amino acids. For these molecules, a slurry of 1 g cation exchange resin in deionized water is packed into a column of the apparatus. Initial generation of the resin is done by flowing sequentially 2.3 bed volumes of 2N NaOH and 2N HCl (1 mL each) to rinse the resin, followed by 0.5 mL of deionized water. This makes the pH of the resin near neutral, and eliminates cross sample contamination. Afterward, 2.3 mL of extracted sample is then loaded into the column onto the top of the resin bed. Because the column is packed tightly, the sample can be applied without disturbing the resin bed. This is a vital step needed to ensure that the analytes adhere to the resin. After the sample is drained, oxalic acid (1 mL, pH 1.6-1.8, adjusted with NH4OH) is pumped into the column. Oxalic acid works as a

  7. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiproceossor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.
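
    The force idea (a set of identical processes all executing the same program, synchronizing as a group) can be sketched with threads and a barrier; this is my own construction, not the FORTRAN macros:

```python
import threading

# "Force"-style SPMD sketch: every member runs the same code on its own strip
# of the data, a barrier synchronizes the whole force, and one member combines.
def force_sum(data, force_size=4):
    partial = [0] * force_size
    barrier = threading.Barrier(force_size)
    total = []

    def member(rank):
        partial[rank] = sum(data[rank::force_size])  # strip of the index space
        barrier.wait()                               # whole force synchronizes
        if rank == 0:                                # one member reduces
            total.append(sum(partial))

    threads = [threading.Thread(target=member, args=(r,)) for r in range(force_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

grand_total = force_sum(list(range(100)))
```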

  8. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert system. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  9. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches, including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation, in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.
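
    The operator formulation can be made concrete with a worklist algorithm (my own sketch, not from the talk): the algorithm is an operator applied to active elements until none remain, here edge relaxation for single-source shortest paths on an irregular graph.

```python
from collections import deque

def worklist_sssp(graph, source):
    """graph: {node: [(neighbor, weight), ...]}; the operator is edge relaxation."""
    dist = {source: 0}
    work = deque([source])
    while work:
        u = work.popleft()               # pick an active element
        for v, w in graph.get(u, []):    # apply the operator to its neighborhood
            if dist[u] + w < dist.get(v, float("inf")):
                dist[v] = dist[u] + w
                work.append(v)           # activation: v must be reprocessed
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
d = worklist_sssp(g, "a")
```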

  10. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
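
    The data-parallel structure of PCA, in which each processor reduces its share of the spectra to partial sums that are then combined into a covariance matrix and eigendecomposed, can be outlined in a few lines. This is a single-process sketch with simulated chunks, not the authors' cluster implementation:

```python
import numpy as np

def pca_from_partials(chunks, k):
    """Sketch of data-parallel PCA: each 'node' reduces its chunk of
    spectra to partial sums; the root combines them into a covariance
    matrix and eigendecomposes it. The chunks stand in for the
    per-processor data of a cluster implementation."""
    n = sum(c.shape[0] for c in chunks)
    s = sum(c.sum(axis=0) for c in chunks)   # partial sums of vectors
    ss = sum(c.T @ c for c in chunks)        # partial sums of outer products
    mean = s / n
    cov = ss / n - np.outer(mean, mean)      # covariance estimate
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]       # top-k principal components
    return vecs[:, order]

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))            # 1000 pixels, 5 bands
comps = pca_from_partials(np.array_split(data, 4), k=2)
print(comps.shape)  # (5, 2)
```

    The partial-sum reductions are exactly the steps that become collective communication on a cluster, which is why the interconnect dominates scalability.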

  11. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all-optical implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel-confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space-variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space-invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+13 to 34,526.
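
    The condition-number improvement reported above can be illustrated numerically: the condition number is the ratio of extreme singular values, and adding a well-chosen auxiliary response to an ill-conditioned system matrix lowers that ratio. The matrix below is a hypothetical stand-in, not the paper's PSF matrices:

```python
import numpy as np

def cond(a):
    """Matrix condition number: ratio of extreme singular values."""
    s = np.linalg.svd(a, compute_uv=False)
    return s[0] / s[-1]

# A blur-like, ill-conditioned system matrix (hypothetical example).
n = 8
h = np.array([[np.exp(-abs(i - j)) for j in range(n)] for i in range(n)])

# A parallel auxiliary system adds its PSF matrix to the original's;
# here a small multiple of the identity stands in for a designed
# auxiliary response.
h_aux = h + 0.5 * np.eye(n)
print(cond(h) > cond(h_aux))  # True: the augmented system is better conditioned
```

    Lowering the condition number in this way is what makes the subsequent image restoration less sensitive to noise.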

  12. Automated Analysis Workstation

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Information from NASA Tech Briefs of work done at Langley Research Center and the Jet Propulsion Laboratory assisted DiaSys Corporation in manufacturing their first product, the R/S 2000. Since then, the R/S 2000 and R/S 2003 have followed. Recently, DiaSys released their fourth workstation, the FE-2, which automates the process of making and manipulating wet-mount preparations of fecal concentrates. The time needed to read the sample is decreased, permitting technologists to rapidly spot parasites, ova and cysts sometimes carried in the lower intestinal tract of humans and animals. The FE-2 procedure is non-invasive, can be performed on an out-patient basis, and quickly provides confirmatory results.

  13. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  14. Protein fabrication automation

    PubMed Central

    Cox, J. Colin; Lape, Janel; Sayed, Mahmood A.; Hellinga, Homme W.

    2007-01-01

    Facile “writing” of DNA fragments that encode entire gene sequences potentially has widespread applications in biological analysis and engineering. Rapid writing of open reading frames (ORFs) for expressed proteins could transform protein engineering and production for protein design, synthetic biology, and structural analysis. Here we present a process, protein fabrication automation (PFA), which facilitates the rapid de novo construction of any desired ORF from oligonucleotides with low effort, high speed, and little human interaction. PFA comprises software for sequence design, data management, and the generation of instruction sets for liquid-handling robotics, a liquid-handling robot, a robust PCR scheme for gene assembly from synthetic oligonucleotides, and a genetic selection system to enrich correctly assembled full-length synthetic ORFs. The process is robust and scalable. PMID:17242375

  15. Automated Defect Classification (ADC)

    1998-01-01

    The ADC Software System is designed to provide semiconductor defect feature analysis and defect classification capabilities. Defect classification is an important software method used by semiconductor wafer manufacturers to automate the analysis of defect data collected by a wide range of microscopy techniques in semiconductor wafer manufacturing today. These microscopies (e.g., optical bright and dark field, scanning electron microscopy, atomic force microscopy, etc.) generate images of anomalies that are induced or otherwise appear on wafer surfaces as a result of errant manufacturing processes or simple atmospheric contamination (e.g., airborne particles). This software provides methods for analyzing these images, extracting statistical features from the anomalous regions, and applying supervised classifiers to label the anomalies into user-defined categories.

  16. Health care automation companies.

    PubMed

    1995-12-01

    Health care automation companies: card transaction processing/EFT/EDI-capable banks; claims auditing/analysis; claims processors/clearinghouses; coding products/services; computer hardware; computer networking/LAN/WAN; consultants; data processing/outsourcing; digital dictation/transcription; document imaging/optical disk storage; executive information systems; health information networks; hospital/health care information systems; interface engines; laboratory information systems; managed care information systems; patient identification/credit cards; pharmacy information systems; POS terminals; radiology information systems; software--claims related/computer-based patient records/home health care/materials management/supply ordering/physician practice management/translation/utilization review/outcomes; telecommunications products/services; telemedicine/teleradiology; value-added networks. PMID:10153839

  17. Automated Standard Hazard Tool

    NASA Technical Reports Server (NTRS)

    Stebler, Shane

    2014-01-01

    The current system used to generate standard hazard reports is considered cumbersome and iterative. This study defines a structure for this system's process in a clear, algorithmic way so that standard hazard reports and basic hazard analysis may be completed using a centralized, web-based computer application. To accomplish this task, a test server is used to host a prototype of the tool during development. The prototype is configured to easily integrate into NASA's current server systems with minimal alteration. Additionally, the tool is easily updated and provides NASA with a system that may grow to accommodate future requirements and possibly, different applications. Results of this project's success are outlined in positive, subjective reviews completed by payload providers and NASA Safety and Mission Assurance personnel. Ideally, this prototype will increase interest in the concept of standard hazard automation and lead to the full-scale production of a user-ready application.

  18. Expedition automated flow fluorometer

    NASA Astrophysics Data System (ADS)

    Krikun, V. A.; Salyuk, P. A.

    2015-11-01

    This paper describes the design and operation of an automated flow-through dual-channel fluorometer for studying the fluorescence of dissolved organic matter, and the fluorescence of phytoplankton cells with open and closed reaction centers, in sea areas with oligotrophic and eutrophic water types. The device performs step-by-step excitation with two semiconductor lasers or two light-emitting diodes. The excitation wavelengths are 405 nm and 532 nm in the default configuration. The excitation radiation of each light source can be varied in duration, intensity, and repetition rate. Registration of the fluorescence signal is carried out by two photomultipliers with different optical filters, with bandpass ranges of 580-600 nm and 680-700 nm. The configuration of excitation sources and the spectral ranges of registered radiation can be changed to suit the task at hand.

  19. Automated external defibrillators (AEDs).

    PubMed

    2003-06-01

    Automated external defibrillators, or AEDs, will automatically analyze a patient's ECG and, if needed, deliver a defibrillating shock to the heart. We sometimes refer to these devices as AED-only devices or stand-alone AEDs. The basic function of AEDs is similar to that of defibrillator/monitors, but AEDs lack their advanced capabilities and generally don't allow manual defibrillation. A device that functions strictly as an AED is intended to be used by basic users only. Such devices are often referred to as public access defibrillators. In this Evaluation, we present our findings for a newly evaluated model, the Zoll AED Plus. We also summarize our findings for the previously evaluated model that is still on the market and describe other AEDs that are also available but that we haven't evaluated. We rate the models collectively for first-responder use and public access defibrillation (PAD) applications.

  1. [From automation to robotics].

    PubMed

    1985-01-01

    The introduction of automation into the biology laboratory seems unavoidable. But at what cost, if a new machine must be purchased for every new application? Fortunately, the same image processing techniques, belonging to a theoretical framework called Mathematical Morphology, may be used in visual inspection tasks both in the car industry and in the biology lab. Since the market for industrial robotics applications is much larger than the market for biomedical applications, the price of image processing devices drops, sometimes below the price of a complete microscope setup. The power of the image processing methods of Mathematical Morphology is illustrated by various examples: automatic silver grain counting in autoradiography, determination of HLA genotype, electrophoretic gel analysis, automatic screening of cervical smears. Thus several heterogeneous applications may share the same image processing device, provided there is a separate, dedicated workstation for each of them.

  2. Berkeley automated supernova search

    SciTech Connect

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.; Crawford, F.S.; Burns, M.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982.
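
    The core real-time test, differencing each new galaxy image against its stored reference and flagging bright residual pixels, can be sketched as follows; the images, threshold, and noise model are illustrative assumptions, not the search's actual pipeline:

```python
import numpy as np

def detect_transient(new, ref, nsigma=5.0):
    """Toy image-differencing test: subtract the stored reference from
    the new exposure and flag any pixel that brightened by more than
    nsigma times the background noise."""
    diff = new.astype(float) - ref.astype(float)
    noise = np.std(diff)  # crude background-noise estimate
    return np.argwhere(diff > nsigma * noise)

rng = np.random.default_rng(1)
ref = rng.normal(100.0, 2.0, size=(32, 32))       # stored reference frame
new = ref + rng.normal(0.0, 2.0, size=(32, 32))   # fresh exposure
new[12, 7] += 60.0                                # inject a bright transient
hits = detect_transient(new, ref)
print(hits)  # one hit, at pixel (12, 7)
```

    A production pipeline would additionally register the images, match their point-spread functions, and veto known artifacts before declaring a candidate.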

  3. Automating Frame Analysis

    SciTech Connect

    Sanfilippo, Antonio P.; Franklin, Lyndsey; Tratz, Stephen C.; Danielson, Gary R.; Mileson, Nicholas D.; Riensche, Roderick M.; McGrath, Liam

    2008-04-01

    Frame Analysis has come to play an increasingly important role in the study of social movements in Sociology and Political Science. While significant steps have been made in providing a theory of frames and framing, a systematic characterization of the frame concept is still largely lacking and there are no recognized criteria and methods that can be used to identify and marshal frame evidence reliably and in a time and cost effective manner. Consequently, current Frame Analysis work is still too reliant on manual annotation and subjective interpretation. The goal of this paper is to present an approach to the representation, acquisition and analysis of frame evidence which leverages Content Analysis, Information Extraction and Semantic Search methods to provide a systematic treatment of Frame Analysis and automate frame annotation.

  4. Protein fabrication automation.

    PubMed

    Cox, J Colin; Lape, Janel; Sayed, Mahmood A; Hellinga, Homme W

    2007-03-01

    Facile "writing" of DNA fragments that encode entire gene sequences potentially has widespread applications in biological analysis and engineering. Rapid writing of open reading frames (ORFs) for expressed proteins could transform protein engineering and production for protein design, synthetic biology, and structural analysis. Here we present a process, protein fabrication automation (PFA), which facilitates the rapid de novo construction of any desired ORF from oligonucleotides with low effort, high speed, and little human interaction. PFA comprises software for sequence design, data management, and the generation of instruction sets for liquid-handling robotics, a liquid-handling robot, a robust PCR scheme for gene assembly from synthetic oligonucleotides, and a genetic selection system to enrich correctly assembled full-length synthetic ORFs. The process is robust and scalable.

  5. Automated calorimeter testing system

    SciTech Connect

    Rodenburg, W.W.; James, S.J.

    1990-01-01

    The Automated Calorimeter Testing System (ACTS) is a portable measurement device that provides an independent measurement of all critical parameters of a calorimeter system. The ACTS was developed to improve productivity and performance of Mound-produced calorimeters. With ACTS, an individual with minimal understanding of calorimetry operation can perform a consistent set of diagnostic measurements on the system. The operator can identify components whose performance has deteriorated by a simple visual comparison of the current data plots with previous measurements made when the system was performing properly. Thus, downtime and "out of control" situations can be reduced. Should a system malfunction occur, a flowchart of troubleshooting procedures has been developed to facilitate quick identification of the malfunctioning component. If diagnosis is beyond the capability of the operator, the ACTS provides a consistent set of test data for review by a knowledgeable expert. The first field test was conducted at the Westinghouse Savannah River Site in early 1990. 6 figs.

  6. Automated attendance accounting system

    NASA Technical Reports Server (NTRS)

    Chapman, C. P. (Inventor)

    1973-01-01

    An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressable buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as a temporary storage. Data from the terminals is applied to the data buffers on a digit by digit basis for transfer via a multiplexer to the computer.

  7. Automated Defect Classification (ADC)

    SciTech Connect

    1998-01-01

    The ADC Software System is designed to provide semiconductor defect feature analysis and defect classification capabilities. Defect classification is an important software method used by semiconductor wafer manufacturers to automate the analysis of defect data collected by a wide range of microscopy techniques in semiconductor wafer manufacturing today. These microscopies (e.g., optical bright and dark field, scanning electron microscopy, atomic force microscopy, etc.) generate images of anomalies that are induced or otherwise appear on wafer surfaces as a result of errant manufacturing processes or simple atmospheric contamination (e.g., airborne particles). This software provides methods for analyzing these images, extracting statistical features from the anomalous regions, and applying supervised classifiers to label the anomalies into user-defined categories.

  8. Automating the analytical laboratory via the Chemical Analysis Automation paradigm

    SciTech Connect

    Hollen, R.; Rzeszutko, C.

    1997-10-01

    To address the need for standardization within the analytical chemistry laboratories of the nation, the Chemical Analysis Automation (CAA) program within the US Department of Energy, Office of Science and Technology's Robotic Technology Development Program is developing laboratory sample analysis systems that will automate the environmental chemical laboratories. The current laboratory automation paradigm consists of islands-of-automation that do not integrate into a system architecture. Thus, today the chemist must perform most aspects of environmental analysis manually using instrumentation that generally cannot communicate with other devices in the laboratory. CAA is working towards a standardized and modular approach to laboratory automation based upon the Standard Analysis Method (SAM) architecture. Each SAM system automates a complete chemical method. The building block of a SAM is known as the Standard Laboratory Module (SLM). The SLM, either hardware or software, automates a subprotocol of an analysis method and can operate as a standalone or as a unit within a SAM. The CAA concept allows the chemist to easily assemble an automated analysis system, from sample extraction through data interpretation, using standardized SLMs without the worry of hardware or software incompatibility or the necessity of generating complicated control programs. A Task Sequence Controller (TSC) software program schedules and monitors the individual tasks to be performed by each SLM configured within a SAM. The chemist interfaces with the operation of the TSC through the Human Computer Interface (HCI), a logical, icon-driven graphical user interface. The CAA paradigm has successfully been applied in automating EPA SW-846 Methods 3541/3620/8081 for the analysis of PCBs in a soil matrix utilizing commercially available equipment in tandem with SLMs constructed by CAA.

  9. Automated imatinib immunoassay

    PubMed Central

    Beumer, Jan H.; Kozo, Daniel; Harney, Rebecca L.; Baldasano, Caitlin N.; Jarrah, Justin; Christner, Susan M.; Parise, Robert; Baburina, Irina; Courtney, Jodi B.; Salamone, Salvatore J.

    2014-01-01

    Background Imatinib pharmacokinetic variability and the relationship of trough concentrations with clinical outcomes have been extensively reported. Though physical methods to quantitate imatinib exist, they are not widely available for routine use. An automated homogenous immunoassay for imatinib has been developed, facilitating routine imatinib testing. Methods Imatinib-selective monoclonal antibodies, without substantial cross-reactivity to the N-desmethyl metabolite or N-desmethyl conjugates, were produced. The antibodies were conjugated to 200 nm particles to develop immunoassay reagents on the Beckman Coulter AU480™ analyzer. These reagents were analytically validated using Clinical Laboratory Standards Institute protocols. Method comparison to LC-MS/MS was conducted using 77 plasma samples collected from subjects receiving imatinib. Results The assay requires 4 µL of sample without pre-treatment. The non-linear calibration curve ranges from 0 to 3,000 ng/mL. With automated sample dilution, concentrations of up to 9,000 ng/mL can be quantitated. The AU480 produces the first result in 10 minutes, and up to 400 tests per hour. Repeatability ranged from 2.0 to 6.0% coefficient of variation (CV), and within-laboratory reproducibility ranged from 2.9 to 7.4% CV. Standard curve stability was two weeks and on-board reagent stability was six weeks. For clinical samples with imatinib concentrations from 438-2,691 ng/mL, method comparison with LC-MS/MS gave a slope of 0.995 with a y-intercept of 24.3 and a correlation coefficient of 0.978. Conclusion The immunoassay is suitable for quantitating imatinib in human plasma, demonstrating good correlation with a physical method. Testing for optimal imatinib exposure can now be performed on routine clinical analyzers. PMID:25551407

  10. Automated bacteriuria screening using the Berthold LB 950 luminescence analyser.

    PubMed

    Curtis, G D; Johnston, H H; Hack, A R

    1987-06-01

    The Berthold LB950 Automatic Luminescence Analyser was used to estimate bacterial adenosine triphosphate in urine. The system provided a rapid (15 min) and fully automated screening test for bacteriuria at the 10^5 CFU/ml level. Bioluminescence results for 1040 urines were compared with viable counts using two reference culture methods and frequency distributions of bacterial counts and adenosine triphosphate levels were calculated. With a specificity of 79% the automated method showed a sensitivity of 84% using a pour plate reference count and 91% using a standard loop reference count. When contaminated urines were excluded the sensitivity improved to 98%. The automated bioluminescence test, though expensive, was shown to work well with good quality specimens.
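
    The sensitivity and specificity figures quoted above are simple confusion-matrix ratios against the reference culture counts; the counts below are hypothetical, chosen only to reproduce the quoted percentages:

```python
def screen_stats(tp, fn, tn, fp):
    """Sensitivity and specificity from screening counts, as used to
    evaluate an automated test against reference cultures."""
    sensitivity = tp / (tp + fn)   # true positives correctly flagged
    specificity = tn / (tn + fp)   # true negatives correctly passed
    return sensitivity, specificity

# Hypothetical counts, not the study's raw data.
sens, spec = screen_stats(tp=84, fn=16, tn=79, fp=21)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 84%, specificity 79%
```

    Excluding contaminated specimens removes false negatives from the denominator, which is how the reported sensitivity rises toward 98%.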

  11. The impact of parallel chemistry in drug discovery.

    PubMed

    Edwards, Paul J

    2006-05-01

    With the application of parallel synthesis of single compounds to drug-discovery efforts, improvements in the efficiency of synthesis are possible. However, improving effective drug design, a critical requirement for increasing productivity in the modern pharmaceutical industry, also requires implementing in silico design hypotheses that incorporate comprehensive information on a target, including considerations of absorption, distribution, metabolism and excretion. Concomitantly, automated methods of synthesis and purification are also required to improve drug design. Combining all of these elements makes it possible to quickly uncover unique insights into a biological target and thereby accelerate the rate of drug discovery.

  12. Application of parallel distributed processing to space based systems

    NASA Technical Reports Server (NTRS)

    Macdonald, J. R.; Heffelfinger, H. L.

    1987-01-01

    The concept of using Parallel Distributed Processing (PDP) to enhance automated experiment monitoring and control is explored. Recent very large scale integration (VLSI) advances have made such applications an achievable goal. The PDP machine has demonstrated the ability to automatically organize stored information, handle unfamiliar and contradictory input data and perform the actions necessary. The PDP machine has demonstrated that it can perform inference and knowledge operations with greater speed and flexibility and at lower cost than traditional architectures. In applications where the rule set governing an expert system's decisions is difficult to formulate, PDP can be used to extract rules by associating the information an expert receives with the actions taken.

  13. Parallel performance of a preconditioned CG solver for unstructured finite element applications

    SciTech Connect

    Shadid, J.N.; Hutchinson, S.A.; Moffat, H.K.

    1994-12-31

    A parallel unstructured finite element (FE) implementation designed for message passing MIMD machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024 processor nCUBE 2 hypercube. Results indicate over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
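
    At its core, the solver described above is the standard conjugate gradient iteration, in which the matrix-vector product and inner products are the communication-bearing kernels on a distributed mesh. A minimal serial sketch, not the paper's distributed implementation:

```python
import numpy as np

def conjugate_gradient(a, b, tol=1e-10, max_iter=200):
    """Plain (unpreconditioned) CG for symmetric positive definite
    systems. In the parallel FE setting, the mat-vec and the dot
    products are the operations that require communication."""
    x = np.zeros_like(b)
    r = b - a @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        ap = a @ p                # the distributed mat-vec kernel
        alpha = rs / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD system standing in for assembled FE equations.
a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(a, b)
print(np.allclose(a @ x, b))  # True
```

    Good mesh partitioning minimizes the off-processor entries touched by the mat-vec, which is why partitioning metrics directly shape parallel efficiency.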

  14. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of a impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].

  15. Masking apertures enabling automation and solution exchange in sessile droplet lipid bilayers.

    PubMed

    Portonovo, Shiva A; Schmidt, Jacob

    2012-02-01

    Reconstitution of ion channels and transmembrane proteins in planar lipid bilayer membranes allows for their scientific study in highly controlled environments. Recent work with lipid bilayers formed from mechanically joined monolayers has shown their potential for wider technological application, including automation and parallelization. However, bilayer areas are highly sensitive to variations in mechanical position and the bilayers themselves cannot withstand significant perfusion of adjacent solutions. Toward this end, here we describe use of an aperture that masks the monolayer contact area, enabling formation of highly consistent bilayer areas and significantly reducing their variation with changes in relative position of the monolayers. Further, use of the aperture enables flow of solution adjacent to the bilayer without rupture or significant change in bilayer area. The device design is scalable and compatible with SBS standard instrumentation and automation technology, potentially enabling its use for rapid, parallel automated measurements of ion channels for large scale scientific studies and pharmaceutical screening.

  16. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  17. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
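    The merge structure that limits this engine's speed-up can be sketched in Python (an illustration, not code from Titan; a thread pool stands in for the distributed processes): each worker builds a partial contingency table for its data partition, and the partial tables must then be combined, at a cost that grows with the number of distinct category pairs rather than with the number of workers.

```python
from collections import Counter
from multiprocessing.pool import ThreadPool

def partial_table(rows):
    """Count co-occurrences of (x, y) category pairs in one partition."""
    return Counter((x, y) for x, y in rows)

def merged_contingency(partitions, workers=2):
    """Build partial contingency tables concurrently, then merge them.

    The merge is the scalability bottleneck: unlike descriptive
    statistics, its cost grows with the number of distinct categories,
    not just the number of workers.
    """
    with ThreadPool(workers) as pool:
        tables = pool.map(partial_table, partitions)
    total = Counter()
    for table in tables:
        total.update(table)
    return total
```

    For example, merging the partitions `[("a", 1)]` and `[("a", 1), ("b", 2)]` counts the pair `("a", 1)` twice and `("b", 2)` once.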

  18. Automated Fluid Interface System (AFIS)

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Automated remote fluid servicing will be necessary for future space missions, as future satellites will be designed for on-orbit consumable replenishment. In order to develop an on-orbit remote servicing capability, a standard interface between a tanker and the receiving satellite is needed. The objective of the Automated Fluid Interface System (AFIS) program is to design, fabricate, and functionally demonstrate compliance with all design requirements for an automated fluid interface system. A description and documentation of the Fairchild AFIS design is provided.

  19. A Concept for Airborne Precision Spacing for Dependent Parallel Approaches

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Baxley, Brian T.; Abbott, Terence S.; Capron, William R.; Smith, Colin L.; Shay, Richard F.; Hubbs, Clay

    2012-01-01

    The Airborne Precision Spacing concept of operations has been previously developed to support the precise delivery of aircraft landing successively on the same runway. The high-precision and consistent delivery of inter-aircraft spacing allows for increased runway throughput and the use of energy-efficient arrival routes such as Continuous Descent Arrivals and Optimized Profile Descents. This paper describes an extension to the Airborne Precision Spacing concept to enable dependent parallel approach operations where the spacing aircraft must manage their in-trail spacing from a leading aircraft on approach to the same runway and spacing from an aircraft on approach to a parallel runway. Functionality for supporting automation is discussed as well as procedures for pilots and controllers. An analysis is performed to identify the required information and a new ADS-B report is proposed to support these information needs. Finally, several scenarios are described in detail.

  20. Clarity: an open-source manager for laboratory automation.

    PubMed

    Delaney, Nigel F; Rojas Echenique, José I; Marx, Christopher J

    2013-04-01

    Software to manage automated laboratories, when interfaced with hardware instruments, gives users a way to specify experimental protocols and schedule activities to avoid hardware conflicts. In addition to these basics, modern laboratories need software that can run multiple different protocols in parallel and that can be easily extended to interface with a constantly growing diversity of techniques and instruments. We present Clarity, a laboratory automation manager that is hardware agnostic, portable, extensible, and open source. Clarity provides critical features including remote monitoring, robust error reporting by phone or email, and full state recovery in the event of a system crash. We discuss the basic organization of Clarity, demonstrate an example of its implementation for the automated analysis of bacterial growth, and describe how the program can be extended to manage new hardware. Clarity is mature, well documented, actively developed, written in C# for the Common Language Infrastructure, and is free and open-source software. These advantages set Clarity apart from currently available laboratory automation programs. The source code and documentation for Clarity are available at http://code.google.com/p/osla/.
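    The core scheduling idea, running protocols in parallel while avoiding hardware conflicts, can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not Clarity's actual API or algorithm: steps needing the same instrument are serialized, while steps on different instruments may overlap.

```python
def schedule(steps):
    """Greedy conflict-avoidance sketch (hypothetical, not Clarity code).

    Each step is a (protocol, instrument, duration) tuple. Steps that
    need the same instrument are serialized; steps on different
    instruments overlap. Returns (start_time, protocol, instrument)
    tuples sorted by start time.
    """
    free_at = {}   # instrument -> earliest time it is free again
    timeline = []
    for protocol, instrument, duration in steps:
        start = free_at.get(instrument, 0)
        free_at[instrument] = start + duration
        timeline.append((start, protocol, instrument))
    return sorted(timeline)
```

    With two protocols both needing a plate reader, the second reader step starts only after the first finishes, while a robot-arm step runs concurrently from time zero.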

  1. Automated Engineering Design (AED); An approach to automated documentation

    NASA Technical Reports Server (NTRS)

    Mcclure, C. W.

    1970-01-01

    The Automated Engineering Design (AED) system is reviewed; it consists of a high-level systems programming language, a series of modular precoded subroutines, and a set of powerful software machine tools that effectively automate the production and design of new languages. AED is used primarily for development of problem- and user-oriented languages. Software production phases are diagrammed, and factors which inhibit effective documentation are evaluated.

  2. An automated microfluidic platform for C. elegans embryo arraying, phenotyping, and long-term live imaging

    NASA Astrophysics Data System (ADS)

    Cornaglia, Matteo; Mouchiroud, Laurent; Marette, Alexis; Narasimhan, Shreya; Lehnert, Thomas; Jovaisaite, Virginija; Auwerx, Johan; Gijs, Martin A. M.

    2015-05-01

    Studies of the real-time dynamics of embryonic development require a gentle embryo handling method, the possibility of long-term live imaging during the complete embryogenesis, as well as of parallelization providing a population’s statistics, while keeping single embryo resolution. We describe an automated approach that fully accomplishes these requirements for embryos of Caenorhabditis elegans, one of the most employed model organisms in biomedical research. We developed a microfluidic platform which makes use of pure passive hydrodynamics to run on-chip worm cultures, from which we obtain synchronized embryo populations, and to immobilize these embryos in incubator microarrays for long-term high-resolution optical imaging. We successfully employ our platform to investigate morphogenesis and mitochondrial biogenesis during the full embryonic development and elucidate the role of the mitochondrial unfolded protein response (UPRmt) within C. elegans embryogenesis. Our method can be generally used for protein expression and developmental studies at the embryonic level, but can also provide clues to understand the aging process and age-related diseases in particular.

  3. An automated microfluidic platform for C. elegans embryo arraying, phenotyping, and long-term live imaging

    PubMed Central

    Cornaglia, Matteo; Mouchiroud, Laurent; Marette, Alexis; Narasimhan, Shreya; Lehnert, Thomas; Jovaisaite, Virginija; Auwerx, Johan; Gijs, Martin A. M.

    2015-01-01

    Studies of the real-time dynamics of embryonic development require a gentle embryo handling method, the possibility of long-term live imaging during the complete embryogenesis, as well as of parallelization providing a population’s statistics, while keeping single embryo resolution. We describe an automated approach that fully accomplishes these requirements for embryos of Caenorhabditis elegans, one of the most employed model organisms in biomedical research. We developed a microfluidic platform which makes use of pure passive hydrodynamics to run on-chip worm cultures, from which we obtain synchronized embryo populations, and to immobilize these embryos in incubator microarrays for long-term high-resolution optical imaging. We successfully employ our platform to investigate morphogenesis and mitochondrial biogenesis during the full embryonic development and elucidate the role of the mitochondrial unfolded protein response (UPRmt) within C. elegans embryogenesis. Our method can be generally used for protein expression and developmental studies at the embryonic level, but can also provide clues to understand the aging process and age-related diseases in particular. PMID:25950235

  4. Fuzzy Control/Space Station automation

    NASA Technical Reports Server (NTRS)

    Gersh, Mark

    1990-01-01

    Viewgraphs on fuzzy control/space station automation are presented. Topics covered include: Space Station Freedom (SSF); SSF evolution; factors pointing to automation & robotics (A&R); astronaut office inputs concerning A&R; flight system automation and ground operations applications; transition definition program; and advanced automation software tools.

  5. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Automated vessels. 15.715 Section 15.715 Shipping COAST... Limitations and Qualifying Factors § 15.715 Automated vessels. (a) Coast Guard acceptance of automated systems... automated system in establishing initial manning levels; however, until the system is proven reliable,...

  6. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Automated vessels. 15.715 Section 15.715 Shipping COAST... Limitations and Qualifying Factors § 15.715 Automated vessels. (a) Coast Guard acceptance of automated systems... automated system in establishing initial manning levels; however, until the system is proven reliable,...

  7. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Automated vessels. 15.715 Section 15.715 Shipping COAST... Limitations and Qualifying Factors § 15.715 Automated vessels. (a) Coast Guard acceptance of automated systems... automated system in establishing initial manning levels; however, until the system is proven reliable,...

  8. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Automated vessels. 15.715 Section 15.715 Shipping COAST... Limitations and Qualifying Factors § 15.715 Automated vessels. (a) Coast Guard acceptance of automated systems... automated system in establishing initial manning levels; however, until the system is proven reliable,...

  9. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Automated vessels. 15.715 Section 15.715 Shipping COAST... Limitations and Qualifying Factors § 15.715 Automated vessels. (a) Coast Guard acceptance of automated systems... automated system in establishing initial manning levels; however, until the system is proven reliable,...

  10. Human factors in cockpit automation

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1984-01-01

    The rapid advance in microprocessor technology has made it possible to automate many functions that were previously performed manually. Several research areas have been identified which are basic to the question of the implementation of automation in the cockpit. One identified area deserving further research is warning and alerting systems. Modern transport aircraft have had one warning and alerting system after another added, and computer-based cockpit systems make it possible to add even more. Three major areas of concern are: input methods (including voice, keyboard, touch panel, etc.), output methods and displays (from traditional instruments to CRTs, to exotic displays including the human voice), and training for automation. Training for operating highly automatic systems requires considerably more attention than it has been given in the past. Training methods have not kept pace with the advent of flight-deck automation.

  11. Automating the Purple Crow Lidar

    NASA Astrophysics Data System (ADS)

    Hicks, Shannon; Sica, R. J.; Argall, P. S.

    2016-06-01

    The Purple Crow LiDAR (PCL) was built to measure short- and long-term coupling between the lower, middle, and upper atmosphere. The initial component of my M.Sc. project is to automate two key elements of the PCL: the rotating liquid-mercury mirror and the Zaber alignment mirror. In addition to the automation of the Zaber alignment mirror, it is also necessary to describe the mirror's movement and positioning errors. Its properties will then be added into the alignment software. Once the alignment software has been completed, we will compare the new alignment method with the previous manual procedure. This is the first among several projects that will culminate in a fully automated lidar. Eventually, we will be able to work remotely, thereby increasing the amount of data we collect. This paper will describe the motivation for automation, the methods we propose, preliminary results for the Zaber alignment error analysis, and future work.

  12. Real Automation in the Field

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Mayero, Micaela; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We provide a package of strategies for automation of non-linear arithmetic in PVS. In particular, we describe a simplification procedure for the field of real numbers and a strategy for cancellation of common terms.

  13. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  14. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.
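    The parallel-prefix technique the thesis relies on can be sketched as a two-pass block scan in Python (an illustration of the general technique, not code from the thesis): each block computes its local prefix sums independently, then each block's results are offset by the totals of all preceding blocks.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_prefix_sums(xs, workers=4):
    """Two-pass block scan illustrating the parallel-prefix idea.

    Pass 1 (parallel): each block computes its local prefix sums.
    Pass 2 (sequential here, also parallelizable): each block's values
    are shifted by the running total of all preceding blocks.
    """
    n = len(xs)
    size = max(1, -(-n // workers))            # ceil(n / workers)
    blocks = [xs[i:i + size] for i in range(0, n, size)]

    def local_scan(block):
        out, running = [], 0
        for value in block:
            running += value
            out.append(running)
        return out

    with ThreadPoolExecutor(max_workers=workers) as executor:
        scans = list(executor.map(local_scan, blocks))

    result, offset = [], 0
    for scan in scans:
        result.extend(value + offset for value in scan)
        offset += scan[-1]
    return result
```

    The same block-and-offset decomposition underlies parallel divide-and-conquer phases more generally: local work is fully independent, and only the small per-block summaries must be combined.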

  15. Genetic circuit design automation.

    PubMed

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

    Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization.
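    The correctness measure the abstract reports, the fraction of output states that function as predicted, can be illustrated in Python (a sketch, not Cello's code): genetic circuits of this kind are typically built from NOR gates, since a single repressor naturally implements NOR, and each circuit is checked over every input state against its specification.

```python
from itertools import product

def nor(a, b):
    """NOR: the building block, since one genetic repressor implements it."""
    return int(not (a or b))

def output_states(circuit, n_inputs):
    """Enumerate the circuit's output over every input state."""
    return {bits: circuit(*bits) for bits in product((0, 1), repeat=n_inputs)}

def fraction_correct(circuit, spec, n_inputs):
    """Share of output states matching the specification; the abstract
    reports 92% of output states functioning as predicted."""
    got = output_states(circuit, n_inputs)
    return sum(got[bits] == spec[bits] for bits in got) / len(got)

# AND built from three NOR gates: AND(a, b) = NOR(NOR(a, a), NOR(b, b))
def and_from_nors(a, b):
    return nor(nor(a, a), nor(b, b))

AND_SPEC = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```

    A circuit like `and_from_nors` that matches its specification in every state corresponds to one of the 45 fully correct designs.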

  16. An automation simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.; Mutammara, Atheel

    1988-01-01

    The work being done in porting ROBOSIM (a graphical simulation system developed jointly by NASA-MSFC and Vanderbilt University) to the HP350SRX graphics workstation is described. New ROBOSIM features, such as collision detection and new kinematics simulation methods, are also discussed. Based on the experiences of the work on ROBOSIM, a new graphics structural modeling environment is suggested which is intended to be a part of a new knowledge-based multiple aspect modeling testbed. The knowledge-based modeling methodologies and tools already available are described. Three case studies in the area of Space Station automation are also reported. First a geometrical structural model of the station is presented. This model was developed using the ROBOSIM package. Next the possible application areas of an integrated modeling environment in the testing of different Space Station operations are discussed. One of these possible application areas is the modeling of the Environmental Control and Life Support System (ECLSS), which is one of the most complex subsystems of the station. Using the multiple aspect modeling methodology, a fault propagation model of this system is being built and is described.

  17. Automated Supernova Discovery (Abstract)

    NASA Astrophysics Data System (ADS)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of Supernovas as well as other transient events in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data is taken every cloudless night, we must deal with varying atmospheric conditions and high background illumination from the moon. Software is configured to identify a PSN and reshoot for verification, with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24, with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs with magnitude 17.5 or less, which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  18. Genetic circuit design automation.

    PubMed

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

    Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. PMID:27034378

  19. Automated Gas Distribution System

    NASA Astrophysics Data System (ADS)

    Starke, Allen; Clark, Henry

    2012-10-01

    The cyclotron of Texas A&M University is one of the few and prized cyclotrons in the country. Behind the scenes of the cyclotron is a confusing and dangerous arrangement of the ion sources that supply the cyclotron with particles for acceleration. Using this machine requires a time-consuming and wasteful step-by-step process of switching gases, purging, and other important operations that must be performed manually to keep the system functioning properly, while also maintaining the safety of the working environment. A newly developed gas distribution system for the ion source prevents many of the problems generated by the older manual process. The new system can be controlled manually more easily than before but, like most of the technology and machines in the cyclotron, is mainly operated through software developed in the LabVIEW graphical programming environment. The automated gas distribution system provides multiple ports for a selection of different gases, decreasing the amount of gas wasted when switching gases, and a port for the vacuum, decreasing the time spent purging the manifold. The LabVIEW software makes the operation of the cyclotron and ion sources easier and safer for anyone to use.

  20. Automated call tracking systems

    SciTech Connect

    Hardesty, C.

    1993-03-01

    User Services groups are on the front line with user support. We are the first to hear about problems. The speed, accuracy, and intelligence with which we respond determines the user's perception of our effectiveness and our commitment to quality and service. To keep pace with the complex changes at our sites, we must have tools to help build a knowledge base of solutions, a history base of our users, and a record of every problem encountered. Recently, I completed a survey of twenty sites similar to the National Energy Research Supercomputer Center (NERSC). This informal survey reveals that 27% of the sites use a paper system to log calls, 60% employ homegrown automated call tracking systems, and 13% use a vendor-supplied system. Fifty-four percent of those using homegrown systems are exploring the merits of switching to a vendor-supplied system. The purpose of this paper is to provide guidelines for evaluating a call tracking system. In addition, insights are provided to assist User Services groups in selecting a system that fits their needs.

  1. Automated Microbial Metabolism Laboratory

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Automated Microbial Metabolism Laboratory (AMML) 1971-1972 program involved the investigation of three separate life detection schemes. The first was continued further development of the labeled release experiment. The possibility of chamber reuse without intervening sterilization, to provide comparative biochemical information, was tested. Findings show that individual substrates or concentrations of antimetabolites may be sequentially added to a single test chamber. The second detection system investigated for possible inclusion in the AMML package of assays was nitrogen fixation as detected by acetylene reduction. Thirdly, a series of preliminary steps were taken to investigate the feasibility of detecting biopolymers in soil. A strategy for the safe return to Earth of a Mars sample prior to manned landings on Mars is outlined. The program assumes that the probability of indigenous life on Mars is unity and then broadly presents the procedures for acquisition and analysis of the Mars sample in a manner to satisfy the scientific community and the public that adequate safeguards are being taken.

  2. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
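    The rsync-style comparison the patent describes, checksumming fixed-size blocks against a previously produced template and saving only what changed, can be sketched in Python. Block size and hash function here are illustrative assumptions, not values from the patent.

```python
import hashlib
import zlib

BLOCK = 4096  # illustrative block size, not taken from the patent

def block_checksums(data, block=BLOCK):
    """Per-block checksums, as a template checkpoint would store them."""
    return [hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)]

def delta_checkpoint(state, template_sums, block=BLOCK):
    """Save only blocks whose checksum differs from the template.

    Mirrors the core idea: compare a node's checkpoint data against the
    stored template and transmit just the changed blocks, compressed
    with a non-lossy algorithm, along with their block indices.
    """
    delta = []
    for offset in range(0, len(state), block):
        chunk = state[offset:offset + block]
        index = offset // block
        if (index >= len(template_sums)
                or hashlib.sha256(chunk).hexdigest() != template_sums[index]):
            delta.append((index, zlib.compress(chunk)))
    return delta
```

    To restore a node's state, unchanged blocks come from the template and changed blocks are decompressed from the delta; when most of the state matches the template, the delta is a small fraction of a full checkpoint.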

  3. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.
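    Two of the three techniques, communication buffering and strobing, can be sketched together in Python (an illustration of the general idea, not code from the paper): sends return immediately into a local buffer, and a periodic strobe delivers the buffered messages in bulk, which is what lets scheduling decisions use a global view of pending communication.

```python
import queue
import threading

class BufferedChannel:
    """Sketch of communication buffering with a strobe (illustrative,
    not the paper's implementation)."""

    def __init__(self):
        self._buffer = []                 # locally buffered messages
        self._lock = threading.Lock()
        self.delivered = queue.Queue()    # messages visible to the receiver

    def send(self, msg):
        """Non-blocking send: append to the local buffer and return."""
        with self._lock:
            self._buffer.append(msg)

    def strobe(self):
        """At the strobe tick, flush the buffer in one bulk delivery."""
        with self._lock:
            batch, self._buffer = self._buffer, []
        for msg in batch:
            self.delivered.put(msg)
```

    Between strobes, nothing is delivered, so the scheduler sees communication as discrete, globally synchronized batches rather than a stream of individual messages.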

  4. Technology modernization assessment flexible automation

    SciTech Connect

    Bennett, D.W.; Boyd, D.R.; Hansen, N.H.; Hansen, M.A.; Yount, J.A.

    1990-12-01

    The objectives of this report are: to present technology assessment guidelines to be considered in conjunction with defense regulations before an automation project is developed; to give examples showing how assessment guidelines may be applied to a current project; and to present several potential areas where automation might be applied successfully in the depot system. Depots perform primarily repair and remanufacturing operations, with limited small-batch manufacturing runs. While certain activities (such as Management Information Systems and warehousing) are directly applicable to either environment, the majority of applications will require combining existing and emerging technologies in different ways to meet the special needs of the depot remanufacturing environment. Industry generally enjoys the ability to make revisions to its product lines seasonally, followed by batch runs of thousands or more. Depot batch runs are in the tens, at best the hundreds, of parts with a potential for large variation in product mix; reconfiguration may be required on a week-to-week basis. This need for a higher degree of flexibility suggests a higher level of operator interaction and, in turn, control systems that go beyond the state of the art for less flexible automation and industry in general. This report investigates the benefits and barriers to automation and concludes that, while significant benefits do exist for automation, depots must be prepared to carefully investigate the technical feasibility of each opportunity and the life-cycle costs associated with implementation. Implementation is suggested in two ways: (1) develop an implementation plan for automation technologies based on results of small demonstration automation projects; (2) use phased implementation for both these and later-stage automation projects to allow major technical and administrative risk issues to be addressed. 10 refs., 2 figs., 2 tabs. (JF)

  5. Automated Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Thomason, Cindy; Anderson, Paul M.; Martin, James A.

    1990-01-01

    Automated power-distribution system monitors and controls electrical power to modules in network. Handles both 208-V, 20-kHz single-phase alternating current and 120- to 150-V direct current. Power distributed to load modules from power-distribution control units (PDCU's) via subsystem distributors. Ring busses carry power to PDCU's from power source. Needs minimal attention. Detects faults and also protects against them. Potential applications include autonomous land vehicles and automated industrial process systems.

  6. Evolution paths for advanced automation

    NASA Technical Reports Server (NTRS)

    Healey, Kathleen J.

    1990-01-01

    As Space Station Freedom (SSF) evolves, increased automation and autonomy will be required to meet Space Station Freedom Program (SSFP) objectives. As a precursor to the use of advanced automation within the SSFP, especially if it is to be used on SSF (e.g., to automate the operation of the flight systems), the underlying technologies will need to be elevated to a high level of readiness to ensure safe and effective operations. Ground facilities supporting the development of these flight systems -- from research and development laboratories through formal hardware and software development environments -- will be responsible for achieving these levels of technology readiness. These facilities will need to evolve to support the general evolution of the SSFP. This evolution will include support for increasing use of advanced automation. The SSF Advanced Development Program has funded a study to define evolution paths for advanced automation within the SSFP's ground-based facilities which will enable, promote, and accelerate the appropriate use of advanced automation on board SSF. The current capability of the test beds and facilities, such as the Software Support Environment, with regard to advanced automation has been assessed, and their desired evolutionary capabilities have been defined. Plans and guidelines for achieving this necessary capability have been constructed. The approach taken has combined in-depth interviews of test-bed personnel at all SSF Work Package centers with awareness of relevant state-of-the-art technology and technology-insertion methodologies. Key recommendations from the study include advocating a NASA-wide task force for advanced automation, and the creation of software prototype transition environments to facilitate the incorporation of advanced automation in the SSFP.

  7. Experimental Parallel-Processing Computer

    NASA Technical Reports Server (NTRS)

    Mcgregor, J. W.; Salama, M. A.

    1986-01-01

    Master processor supervises slave processors, each with its own memory. Computer with parallel processing serves as inexpensive tool for experimentation with parallel mathematical algorithms. Speed enhancement obtained depends on both nature of problem and structure of algorithm used. In parallel-processing architecture, "bank select" and control signals determine which one, if any, of N slave processor memories accessible to master processor at any given moment. When so selected, slave memory operates as part of master computer memory. When not selected, slave memory operates independently of main memory. Slave processors communicate with each other via input/output bus.

  8. Cultural adaptation to environmental change versus stability.

    PubMed

    Chang, Lei; Chen, Bin-Bin; Lu, Hui Jing

    2013-10-01

    The target article provides an intermediate account of culture and freedom that is conceived to be curvilinear by treating economic development not as an adaptive outcome in response to climate but as a cause of culture parallel to climate. We argue that the extent of environmental variability, including climatic variability, affects cultural adaptation.

  9. Sub-Second Parallel State Estimation

    SciTech Connect

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

    2014-10-31

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data were extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tools. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions, to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits to sub-second SE as well: PSE results can potentially be used in local and/or wide-area automatic corrective control actions that currently depend on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate effects.
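    In the linear (DC) approximation, the core computation of such a state estimator reduces to a weighted least-squares solve of the measurement equations. A minimal pure-Python sketch of that computation (the measurement matrix, values, and weights below are illustrative, not BPA data; a production tool like PSE uses sparse parallel solvers instead of dense normal equations):

```python
def wls_estimate(H, z, w):
    """Weighted least-squares solve of H x ~= z with weights w,
    via the normal equations A x = b where A = H^T W H and
    b = H^T W z, using Gaussian elimination with partial pivoting.
    Toy dense version for small illustrative systems only."""
    m, n = len(H), len(H[0])
    A = [[sum(w[k] * H[k][i] * H[k][j] for k in range(m))
          for j in range(n)] for i in range(n)]
    b = [sum(w[k] * H[k][i] * z[k] for k in range(m)) for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n                              # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Two states measured directly plus one redundant sum measurement:
x = wls_estimate([[1, 0], [0, 1], [1, 1]], [1.0, 2.0, 3.1], [1.0, 1.0, 1.0])
print(x)  # roughly [1.033, 2.033]: the estimate reconciling the redundancy
```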

  10. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  11. Parallel architectures and neural networks

    SciTech Connect

    Calianiello, E.R.

    1989-01-01

    This book covers parallel computer architectures and neural networks. Topics include: neural modeling, use of ADA to simulate neural networks, VLSI technology, implementation of Boltzmann machines, and analysis of neural nets.

  12. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  13. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

  14. Metal structures with parallel pores

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  15. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses itself to the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. In the class of problems of the second kind, the problem of prefix computation, an important problem due to the number of naturally occurring computations it can model, is studied. Finally, a general methodology is given for design of parallel algorithms that can be used to optimize a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.
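    The prefix computation mentioned above is the canonical example of such a time-processor tradeoff. A sketch of the standard three-phase P-processor block scan, executed sequentially here purely to show how the work divides (the function name and data are illustrative):

```python
from itertools import accumulate

def block_prefix_sums(data, p):
    """Prefix sums via the classic P-processor scheme: each of p
    blocks is scanned independently (phase 1), the block totals are
    scanned to get offsets (phase 2), and offsets are added back
    (phase 3). Phases 1 and 3 are the parallel parts."""
    size = -(-len(data) // p)  # ceil(n / p) elements per block
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    local = [list(accumulate(b)) for b in blocks]           # phase 1
    offsets = [0] + list(accumulate(b[-1] for b in local))  # phase 2
    return [x + off for b, off in zip(local, offsets) for x in b]  # phase 3

print(block_prefix_sums([1, 2, 3, 4, 5, 6, 7], 3))  # [1, 3, 6, 10, 15, 21, 28]
```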

  16. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processors) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and prefix sums.
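    The reduction to finite-state automaton simulation can be illustrated sequentially: a prefix code defines a trie-shaped automaton, and decoding is a walk that emits a symbol and restarts at the root whenever a leaf is reached. A minimal sketch (the sample code and names are invented for illustration; the parallel version splits the bit string among processors and reconciles automaton states at block boundaries):

```python
def build_trie(code):
    """Build a decoding automaton (trie) from a prefix code that
    maps symbols to binary codeword strings."""
    trie = {}
    for symbol, word in code.items():
        node = trie
        for bit in word[:-1]:
            node = node.setdefault(bit, {})
        node[word[-1]] = symbol  # leaf stores the decoded symbol
    return trie

def decode(bits, trie):
    """Walk the automaton over the bit string; on reaching a leaf,
    emit its symbol and restart from the root."""
    out, node = [], trie
    for bit in bits:
        nxt = node[bit]
        if isinstance(nxt, dict):
            node = nxt
        else:
            out.append(nxt)
            node = trie
    return ''.join(out)

code = {'a': '0', 'b': '10', 'c': '11'}  # a sample prefix code
print(decode('010011', build_trie(code)))  # prints 'abac'
```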

  17. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  18. Automated ship image acquisition

    NASA Astrophysics Data System (ADS)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc.). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports, to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically-composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  19. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results are presented of research conducted to develop a parallel graphic application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
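    The finite-difference method for the vibrating string is a three-point update per grid cell per time step; in the parallel implementation it is this interior loop that gets divided among processors, with synchronization at each step. A sequential sketch of one step (names and the stability remark are the standard textbook treatment, not taken from the report):

```python
def step_wave(u_prev, u_curr, c2):
    """One explicit finite-difference step of the 1-D wave equation
    u_tt = c^2 u_xx with fixed (zero) endpoints. c2 is the squared
    Courant number (c*dt/dx)**2, which must be <= 1 for stability.
    u_prev and u_curr are the displacements at the two previous
    time levels; returns the next time level."""
    n = len(u_curr)
    u_next = [0.0] * n  # fixed ends stay at zero
    for i in range(1, n - 1):  # the loop a parallel version divides up
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + c2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next
```

    Repeated calls (shifting `u_curr` into `u_prev` each step) advance the string in time; the synchronization overhead discussed above arises because each processor needs its neighbors' boundary values before the next step.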

  20. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  1. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
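    The speed-up and efficiency figures compared across these strategies are the standard definitions S = T1/Tp and E = S/p. A trivial helper (the timings below are made up for illustration, not taken from the study):

```python
def speedup_efficiency(t_serial, t_parallel, p):
    """Parallel speedup S = T1/Tp and efficiency E = S/p for a job
    that takes t_serial seconds on one processor and t_parallel
    seconds on p processors."""
    s = t_serial / t_parallel
    return s, s / p

s, e = speedup_efficiency(1600.0, 125.0, 16)
print(s, e)  # ~12.8x speedup at 80% efficiency on 16 processors
```

    An efficiency near 1.0 is what "excellent parallel efficiency" means above; the branch-swapping plateau beyond 16 processors corresponds to efficiency falling as p grows with S roughly fixed.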

  2. Method and automated apparatus for detecting coliform organisms

    NASA Technical Reports Server (NTRS)

    Dill, W. P.; Taylor, R. E.; Jeffers, E. L. (Inventor)

    1980-01-01

    Method and automated apparatus are disclosed for determining the time of detection of metabolically produced hydrogen by coliform bacteria cultured in an electroanalytical cell from the time the cell is inoculated with the bacteria. The detection time data provides bacteria concentration values. The apparatus is sequenced and controlled by a digital computer to discharge a spent sample, clean and sterilize the culture cell, provide a bacteria nutrient into the cell, control the temperature of the nutrient, inoculate the nutrient with a bacteria sample, measure the electrical potential difference produced by the cell, and measure the time of detection from inoculation.

  3. A Droplet Microfluidic Platform for Automating Genetic Engineering.

    PubMed

    Gach, Philip C; Shih, Steve C C; Sustarich, Jess; Keasling, Jay D; Hillson, Nathan J; Adams, Paul D; Singh, Anup K

    2016-05-20

    We present a water-in-oil droplet microfluidic platform for transformation, culture and expression of recombinant proteins in multiple host organisms including bacteria, yeast and fungi. The platform consists of a hybrid digital microfluidic/channel-based droplet chip with integrated temperature control to allow complete automation and integration of plasmid addition, heat-shock transformation, addition of selection medium, culture, and protein expression. The microfluidic format permitted significant reduction in consumption (100-fold) of expensive reagents such as DNA and enzymes compared to the benchtop method. The chip contains a channel to continuously replenish oil to the culture chamber to provide a fresh supply of oxygen to the cells for long-term (∼5 days) cell culture. The flow channel also replenished oil lost to evaporation and increased the number of droplets that could be processed and cultured. The platform was validated by transforming several plasmids into Escherichia coli including plasmids containing genes for fluorescent proteins GFP, BFP and RFP; plasmids with selectable markers for ampicillin or kanamycin resistance; and a Golden Gate DNA assembly reaction. We also demonstrate the applicability of this platform for transformation in widely used eukaryotic organisms such as Saccharomyces cerevisiae and Aspergillus niger. Duration and temperatures of the microfluidic heat-shock procedures were optimized to yield transformation efficiencies comparable to those obtained by benchtop methods with a throughput up to 6 droplets/min. The proposed platform offers potential for automation of molecular biology experiments significantly reducing cost, time and variability while improving throughput.

  5. Automated security response robot

    NASA Astrophysics Data System (ADS)

    Ciccimaro, Dominic A.; Everett, Hobart R.; Gilbreath, Gary A.; Tran, Tien T.

    1999-01-01

    ROBART III is intended as an advanced demonstration platform for non-lethal response measures, extending the concepts of reflexive teleoperation into the realm of coordinated weapons control in law enforcement and urban warfare scenarios. A rich mix of ultrasonic and optical proximity and range sensors facilitates remote operation in unstructured and unexplored buildings with minimal operator supervision. Autonomous navigation and mapping of interior spaces is significantly enhanced by an innovative algorithm which exploits the fact that the majority of man-made structures are characterized by parallel and orthogonal walls. Extremely robust intruder detection and assessment capabilities are achieved through intelligent fusion of a multitude of inputs from various onboard motion sensors. Intruder detection is addressed by a 360-degree staring array of passive-IR motion detectors, augmented by a number of positionable head-mounted sensors. Automatic camera tracking of a moving target is accomplished using a video line digitizer. Non-lethal response systems include a six-barrelled pneumatically-powered Gatling gun, high-powered strobe lights, and three ear-piercing 103-decibel sirens.

  6. Comparative evaluation of two fully-automated real-time PCR methods for MRSA admission screening in a tertiary-care hospital.

    PubMed

    Hos, N J; Wiegel, P; Fischer, J; Plum, G

    2016-09-01

    We evaluated two fully-automated real-time PCR systems, the novel QIAGEN artus MRSA/SA QS-RGQ and the widely used BD MAX MRSA assay, for their diagnostic performance in MRSA admission screening in a tertiary-care university hospital. Two hundred sixteen clinical swabs were analyzed for MRSA DNA using the BD MAX MRSA assay. In parallel, the same specimens were tested with the QIAGEN artus MRSA/SA QS-RGQ. Automated steps included lysis of bacteria, DNA extraction, real-time PCR and interpretation of results. MRSA culture was additionally performed as a reference method for MRSA detection. Sensitivity values were similar for both assays (80 %), while the QIAGEN artus MRSA/SA QS-RGQ reached a slightly higher specificity (95.8 % versus 90.0 %). Positive (PPVs) and negative predictive values (NPVs) were 17.4 % and 99.4 % for the BD MAX MRSA assay and 33.3 % and 99.5 % for the QIAGEN artus MRSA/SA QS-RGQ, respectively. Total turn-around time (TAT) for 24 samples was 3.5 hours for both assays. In conclusion, both assays represent reliable diagnostic tools due to their high negative predictive values, especially for the rapid identification of MRSA negative patients in a low prevalence MRSA area. PMID:27259711
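    The PPV and NPV figures reported above follow directly from a 2x2 confusion table; in a low-prevalence setting even a fairly specific assay yields a modest PPV, which is why the high NPVs are the clinically useful numbers for ruling out carriage. A sketch with illustrative counts (invented for this example, not the study's raw data):

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion-table
    counts: PPV = TP/(TP+FP), NPV = TN/(TN+FN). Unlike sensitivity
    and specificity, both depend on prevalence in the tested
    population."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical low-prevalence screen: 10 true carriers among 216,
# with 8 detected (80% sensitivity) and 16 false positives.
ppv, npv = predictive_values(tp=8, fp=16, tn=190, fn=2)
print(round(ppv, 3), round(npv, 3))  # 0.333 0.99
```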

  7. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  8. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  9. Automation: Decision Aid or Decision Maker?

    NASA Technical Reports Server (NTRS)

    Skitka, Linda J.

    1998-01-01

    This study clarified that automation bias is something unique to automated decision making contexts, and is not the result of a general tendency toward complacency. By comparing performance on exactly the same events on the same tasks with and without an automated decision aid, we were able to determine that at least the omission error part of automation bias is due to the unique context created by having an automated decision aid, and is not a phenomenon that would occur even if people were not in an automated context. However, this study also revealed that having an automated decision aid did lead to modestly improved performance across all non-error events. Participants in the non-automated condition responded with 83.68% accuracy, whereas participants in the automated condition responded with 88.67% accuracy, across all events. Automated decision aids clearly led to better overall performance when they were accurate. People performed almost exactly at the same level of reliability as the automation (which across events was 88% reliable). However, also clear, is that the presence of less than 100% accurate automated decision aids creates a context in which new kinds of errors in decision making can occur. Participants in the non-automated condition responded with 97% accuracy on the six "error" events, whereas participants in the automated condition had only a 65% accuracy rate when confronted with those same six events. In short, the presence of an AMA can lead to vigilance decrements that can lead to errors in decision making.

  10. Automated protein NMR resonance assignments.

    PubMed

    Wan, Xiang; Xu, Dong; Slupsky, Carolyn M; Lin, Guohui

    2003-01-01

    NMR resonance peak assignment is one of the key steps in solving an NMR protein structure. The assignment process links resonance peaks to individual residues of the target protein sequence, providing the prerequisite for establishing intra- and inter-residue spatial relationships between atoms. The assignment process is tedious and time-consuming, and can take many weeks. Though a number of computer programs exist to assist the assignment process, many NMR labs still do the assignments manually to ensure quality. This paper presents (1) a new scoring system for mapping spin systems to residues, (2) an automated adjacency information extraction procedure from NMR spectra, and (3) a very fast assignment algorithm, based on our previously proposed greedy filtering method and a maximum matching algorithm, to automate the assignment process. Computational tests on 70 instances of (pseudo) experimental NMR data from 14 proteins demonstrate that the new scoring scheme has much better discerning power with the aid of adjacency information between spin systems simulated across various NMR spectra. Typically, with automated extraction of adjacency information, our method achieves nearly complete assignments for most of the proteins. These experiments suggest that the fast automated assignment algorithm, together with the new scoring scheme and automated adjacency extraction, may be ready for practical use. PMID:16452794
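    The mapping of spin systems to residues can be caricatured as an assignment problem over a score matrix. A much-simplified greedy stand-in for the paper's greedy-filtering-plus-maximum-matching pipeline (the score matrix below is invented; real scores would come from the scoring system described):

```python
def greedy_assign(scores):
    """Greedy one-to-one assignment: repeatedly take the
    highest-scoring unused (spin system, residue) pair from a
    scores[i][j] matrix. A maximum matching algorithm, as in the
    paper, can improve on this greedy choice."""
    pairs = sorted(((s, i, j) for i, row in enumerate(scores)
                    for j, s in enumerate(row)), reverse=True)
    used_spin, used_res, assign = set(), set(), {}
    for s, i, j in pairs:
        if i not in used_spin and j not in used_res:
            assign[i] = j
            used_spin.add(i)
            used_res.add(j)
    return assign

print(greedy_assign([[0.9, 0.1], [0.8, 0.7]]))  # {0: 0, 1: 1}
```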

  11. Space power subsystem automation technology

    NASA Technical Reports Server (NTRS)

    Graves, J. R. (Compiler)

    1982-01-01

    The technology issues involved in power subsystem automation and the reasonable objectives to be sought in such a program were discussed. The complexities, uncertainties, and alternatives of power subsystem automation, along with the advantages from both an economic and a technological perspective were considered. Whereas most spacecraft power subsystems now use certain automated functions, the idea of complete autonomy for long periods of time is almost inconceivable. Thus, it seems prudent that the technology program for power subsystem automation be based upon a growth scenario which should provide a structured framework of deliberate steps to enable the evolution of space power subsystems from the current practice of limited autonomy to a greater use of automation with each step being justified on a cost/benefit basis. Each accomplishment should move toward the objectives of decreased requirement for ground control, increased system reliability through onboard management, and ultimately lower energy cost through longer life systems that require fewer resources to operate and maintain. This approach seems well-suited to the evolution of more sophisticated algorithms and eventually perhaps even the use of some sort of artificial intelligence. Multi-hundred kilowatt systems of the future will probably require an advanced level of autonomy if they are to be affordable and manageable.

  12. Parallel Implicit Algorithms for CFD

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

  13. Parallel computation and computers for artificial intelligence

    SciTech Connect

    Kowalik, J.S.

    1988-01-01

    This book discusses Parallel Processing in Artificial Intelligence; Parallel Computing using Multilisp; Execution of Common Lisp in a Parallel Environment; Qlisp; Restricted AND-Parallel Execution of Logic Programs; PARLOG: Parallel Programming in Logic; and Data-driven Processing of Semantic Nets. Attention is also given to: Application of the Butterfly Parallel Processor in Artificial Intelligence; On the Range of Applicability of an Artificial Intelligence Machine; Low-level Vision on Warp and the Apply Programming Mode; AHR: A Parallel Computer for Pure Lisp; FAIM-1: An Architecture for Symbolic Multi-processing; and Overview of Al Application Oriented Parallel Processing Research in Japan.

  14. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  15. Optimization of expression conditions for soluble protein by using a robotic system of multi-culture vessels.

    PubMed

    Ahn, Woo-Sung; Ahn, Ji-Young; Jung, Chan-Hun; Hwang, Kwang Yeon; Kim, Eunice Eunkyeong; Kim, Joon; Im, Hana; Kim, Jin-Oh; Yu, Myeong-Hee; Lee, Cheolju

    2007-11-01

    We have developed a robotic system for an automated parallel cell cultivation process that enables screening of induction parameters for the soluble expression of recombinant protein. The system is designed for parallelized and simultaneous cultivation of up to 24 different types of cells or a single type of cell at 24 different conditions. Twenty-four culture vessels of about 200 ml are arranged in four columns x six rows. The system is equipped with four independent thermostated waterbaths, each of which accommodates six culture vessels. A two-channel liquid handler is attached in order to distribute medium from the reservoir to the culture vessels, to transfer seed or other reagents, and to take an aliquot from the growing cells. Cells in each vessel are agitated and aerated by sparging filtered air. We tested the system by growing Escherichia coli BL21(DE3) cells harboring a plasmid for a model protein, and used it in optimizing protein expression conditions by varying the induction temperature and the inducer concentration. The results revealed the usefulness of our custom-made cell cultivation robot in screening optimal conditions for the expression of soluble proteins.

  16. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  17. Computing contingency statistics in parallel.

    SciTech Connect

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
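    The map-reduce pattern the abstract alludes to can be sketched in a few lines of Python: each task tabulates its own chunk, the partial tables merge by addition, and derived statistics such as χ² follow from the merged table. This is a serial stand-in for the parallel reduction, using the standard χ² formula rather than the paper's implementation.

    ```python
    from collections import Counter

    def local_table(pairs):
        # Map step: each task tabulates its own chunk of (x, y) observations.
        return Counter(pairs)

    def merge(tables):
        # Reduce step: contingency tables merge by simple addition, which is
        # what makes the tabulation embarrassingly parallel up to the cost
        # of shipping the tables between processors.
        total = Counter()
        for t in tables:
            total += t
        return total

    def chi_squared(table):
        # Standard chi-squared independence statistic from the joint table.
        n = sum(table.values())
        rows, cols = Counter(), Counter()
        for (x, y), c in table.items():
            rows[x] += c
            cols[y] += c
        chi2 = 0.0
        for x in rows:
            for y in cols:
                expected = rows[x] * cols[y] / n
                observed = table.get((x, y), 0)
                chi2 += (observed - expected) ** 2 / expected
        return chi2

    chunks = [[("a", 0), ("a", 0), ("b", 1)], [("a", 0), ("b", 1), ("b", 0)]]
    table = merge(local_table(c) for c in chunks)
    chi2 = chi_squared(table)
    ```

    The communication issue the paper highlights is visible here: the merged `Counter` grows with the number of distinct (x, y) categories, whereas moment accumulators stay fixed-size.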

  18. Parallel Density-Based Clustering for Discovery of Ionospheric Phenomena

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Gowanlock, M.; Blair, D. M.

    2015-12-01

    Ionospheric total electron content maps derived from global networks of dual-frequency GPS receivers can reveal a plethora of ionospheric features in real-time and are key to space weather studies and natural hazard monitoring. However, growing data volumes from expanding sensor networks are making manual exploratory studies challenging. As the community is heading towards Big Data ionospheric science, automation and Computer-Aided Discovery become indispensable tools for scientists. One problem of machine learning methods is that they require domain-specific adaptations in order to be effective and useful for scientists. Addressing this problem, our Computer-Aided Discovery approach allows scientists to express various physical models as well as perturbation ranges for parameters. The search space is explored through an automated system and parallel processing of batched workloads, which finds corresponding matches and similarities in empirical data. We discuss density-based clustering as a particular method we employ in this process. Specifically, we adapt Density-Based Spatial Clustering of Applications with Noise (DBSCAN). This algorithm groups geospatial data points based on density. Clusters of points can be of arbitrary shape, and the number of clusters is not predetermined by the algorithm; only two input parameters need to be specified: (1) a distance threshold, (2) a minimum number of points within that threshold. We discuss an implementation of DBSCAN for batched workloads that is amenable to parallelization on manycore architectures such as Intel's Xeon Phi accelerator with 60+ general-purpose cores. This manycore parallelization can cluster large volumes of ionospheric total electronic content data quickly. Potential applications for cluster detection include the visualization, tracing, and examination of traveling ionospheric disturbances or other propagating phenomena. Acknowledgments. We acknowledge support from NSF ACI-1442997 (PI V. Pankratius).
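    The two-parameter DBSCAN procedure the abstract describes (a distance threshold and a minimum neighbor count) can be sketched in a minimal serial Python version; this is the textbook algorithm, not the manycore implementation discussed above, and the sample points are illustrative.

    ```python
    def dbscan(points, eps, min_pts):
        # Minimal DBSCAN on 2D points: labels[i] is a cluster id, or -1 for noise.
        def neighbors(i):
            px, py = points[i]
            return [j for j, (qx, qy) in enumerate(points)
                    if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

        labels = [None] * len(points)
        cluster = -1
        for i in range(len(points)):
            if labels[i] is not None:
                continue
            seeds = neighbors(i)
            if len(seeds) < min_pts:
                labels[i] = -1          # provisionally noise
                continue
            cluster += 1                # i is a core point: start a new cluster
            labels[i] = cluster
            queue = [j for j in seeds if j != i]
            while queue:
                j = queue.pop()
                if labels[j] == -1:
                    labels[j] = cluster  # border point claimed by this cluster
                if labels[j] is not None:
                    continue
                labels[j] = cluster
                more = neighbors(j)
                if len(more) >= min_pts:  # j is also a core point: expand
                    queue.extend(more)
        return labels

    pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5), (50, 50)]
    labels = dbscan(pts, eps=1.0, min_pts=3)
    ```

    The batched-workload parallelization in the paper amounts to running many such clusterings (over parameter perturbations) concurrently, since each run is independent.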

  19. Design automation for integrated circuits

    NASA Astrophysics Data System (ADS)

    Newell, S. B.; de Geus, A. J.; Rohrer, R. A.

    1983-04-01

    Consideration is given to the development status of the use of computers in automated integrated circuit design methods, which promise the minimization of both design time and design error incidence. Integrated circuit design encompasses two major tasks: logic specification, in which the goal is a logic diagram that accurately represents the desired electronic function, and physical specification, in which the goal is an exact description of the physical locations of all circuit elements and their interconnections on the chip. Design automation not only saves money by reducing design and fabrication time, but also helps the community of systems and logic designers to work more innovatively. Attention is given to established design automation methodologies, programmable logic arrays, and design shortcuts.

  20. Automated power management and control

    NASA Technical Reports Server (NTRS)

    Dolce, James L.

    1991-01-01

    A comprehensive automation design is being developed for Space Station Freedom's electric power system. A joint effort between NASA's Office of Aeronautics and Exploration Technology and NASA's Office of Space Station Freedom, it strives to increase station productivity by applying expert systems and conventional algorithms to automate power system operation. The initial station operation will use ground-based dispatches to perform the necessary command and control tasks. These tasks constitute planning and decision-making activities that strive to eliminate unplanned outages. We perceive an opportunity to help these dispatchers make fast and consistent on-line decisions by automating three key tasks: failure detection and diagnosis, resource scheduling, and security analysis. Expert systems will be used for the diagnostics and for the security analysis; conventional algorithms will be used for the resource scheduling.

  1. Automated mapping of hammond's landforms

    USGS Publications Warehouse

    Gallant, A.L.; Brown, D.D.; Hoffer, R.M.

    2005-01-01

    We automated a method for mapping Hammond's landforms over large landscapes using digital elevation data. We compared our results against Hammond's published landform maps, derived using manual interpretation procedures. We found general agreement in landform patterns mapped by the manual and the automated approaches, and very close agreement in characterization of local topographic relief. The two approaches produced different interpretations of intermediate landforms, which relied upon quantification of the proportion of landscape having gently sloping terrain. This type of computation is more efficiently and consistently applied by a computer than by a human. Today's ready access to digital data and computerized geospatial technology provides a good foundation for mapping terrain features, but the mapping criteria guiding manual techniques in the past may not be appropriate for automated approaches. We suggest that future efforts center on the advantages offered by digital advancements in refining an approach to better characterize complex landforms. © 2005 IEEE.
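    The local-relief characterization mentioned above, one ingredient of a Hammond-style classification, is a simple moving-window computation over a digital elevation model. A minimal Python sketch (the window size and toy DEM are illustrative assumptions, not the paper's parameters):

    ```python
    def local_relief(dem, window=1):
        # Local relief: max minus min elevation within a moving window
        # centered on each cell of the elevation grid.
        rows, cols = len(dem), len(dem[0])
        relief = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                cells = [dem[rr][cc]
                         for rr in range(max(0, r - window), min(rows, r + window + 1))
                         for cc in range(max(0, c - window), min(cols, c + window + 1))]
                relief[r][c] = max(cells) - min(cells)
        return relief

    dem = [[100, 101, 102],
           [100, 110, 103],
           [100, 101, 104]]
    relief = local_relief(dem)
    ```

    A full Hammond classification would combine this relief layer with slope and profile-type layers and then bin each cell into a landform class, which is exactly the kind of rule application a computer performs more consistently than a manual interpreter.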

  2. Automated gaseous criteria pollutant audits

    SciTech Connect

    Watson, J.P.

    1998-12-31

    The Quality Assurance Section (QAS) of the California Air Resources Board (CARB) began performing automated gaseous audits of its ambient air monitoring sites in July 1996. The concept of automated audits evolved from the constant streamlining of the through-the-probe audit process. Continual audit van development and the desire to utilize advanced technology to save time and improve the accuracy of the overall audit process also contributed to the concept. The automated audit process is a computer program which controls an audit van's ambient gas calibration system, isolated relay and analog to digital cards, and a monitoring station's data logging system. The program instructs the audit van's gas calibration system to deliver specified audit concentrations to a monitoring station's instruments through their collection probe inlet. The monitoring station's responses to the audit concentrations are obtained by the program polling the station's datalogger through its RS-232 port. The program calculates relevant audit statistics and stores all data collected during an audit in a relational database. Planning for the development of an automated gaseous audit system began in earnest in 1993, when the CARB purchased computerized ambient air calibration systems which could be remotely controlled by computer through their serial ports. After receiving all the required components of the automated audit system, they were individually tested to confirm their correct operation. Subsequently, a prototype program was developed to perform through-the-probe automated ozone audits. Numerous simulated ozone audits documented the program's ability to control audit equipment and extract data from a monitoring station's data logging system. The program was later modified to incorporate the capability to perform audits for carbon monoxide, total hydrocarbons, methane, nitrogen dioxide, sulfur dioxide, and hydrogen sulfide.

  3. BOA: Framework for automated builds

    SciTech Connect

    N. Ratnikova et al.

    2003-09-30

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release, and distribution process is most beneficial in large collaborations, where large numbers of developers, multiple platforms, and a distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components that have been developed in CMS to facilitate software maintenance and improve software quality. The system allows one to generate, control, and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases, and installation of existing versions.

  4. Advanced automation for space missions

    SciTech Connect

    Freitas, R.A., Jr.; Healy, T.J.; Long, J.E.

    1982-01-01

    A NASA/ASEE summer study conducted at the University of Santa Clara in 1980 examined the feasibility of using advanced artificial intelligence and automation technologies in future NASA space missions. Four candidate applications missions were considered: an intelligent earth-sensing information system; an autonomous space exploration system; an automated space manufacturing facility; and a self-replicating, growing lunar factory. The study assessed the various artificial intelligence and machine technologies which must be developed if such sophisticated missions are to become feasible by the century's end. 18 references.

  5. Automating Shallow Seismic Imaging

    SciTech Connect

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the ''Autojuggie'' showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy-access environments, this device could

  6. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
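    The tree-of-blocks data structure the abstract describes, in which each sub-grid block is a node of a quad-tree (2D) or oct-tree (3D), can be sketched in Python. This toy version refines 2D blocks around a point of interest; the refinement criterion is a hypothetical stand-in for whatever error indicator an application would supply.

    ```python
    class Block:
        # One node of the quad-tree: a logically Cartesian block covering
        # a square region; refinement replaces it with four children.
        def __init__(self, x, y, size, level=0):
            self.x, self.y, self.size, self.level = x, y, size, level
            self.children = []

        def refine(self):
            half = self.size / 2
            self.children = [Block(self.x + dx * half, self.y + dy * half,
                                   half, self.level + 1)
                             for dy in (0, 1) for dx in (0, 1)]

    def leaves(block):
        # The leaf blocks are the ones that actually carry the solution mesh.
        if not block.children:
            return [block]
        return [leaf for child in block.children for leaf in leaves(child)]

    def contains(b, px, py):
        return b.x <= px < b.x + b.size and b.y <= py < b.y + b.size

    # Refine wherever the (hypothetical) indicator fires: here, blocks
    # whose region contains the point (0.1, 0.1), for three levels.
    root = Block(0.0, 0.0, 1.0)
    frontier = [root]
    for _ in range(3):
        next_frontier = []
        for b in frontier:
            if contains(b, 0.1, 0.1):
                b.refine()
                next_frontier.extend(b.children)
        frontier = next_frontier

    n_leaves = len(leaves(root))
    ```

    In PARAMESH proper, each leaf block carries its own logically Cartesian mesh with guard cells, and the tree is distributed across processors for load balance.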

  7. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massive parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementations are available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. Code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
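    The core quantities the abstract mentions (which seed each point belongs to, and per-cell volumes) can be illustrated with a brute-force Python sketch: assign grid samples to their nearest seed and estimate each Voronoi cell's volume by counting. This is a Monte-Carlo-style stand-in for a true tessellation library such as Qhull, and the seeds and grid size are illustrative assumptions.

    ```python
    def nearest_seed(p, seeds):
        # Each sample point belongs to the Voronoi cell of its nearest seed.
        return min(range(len(seeds)),
                   key=lambda i: (p[0] - seeds[i][0]) ** 2
                               + (p[1] - seeds[i][1]) ** 2)

    seeds = [(0.25, 0.25), (0.75, 0.75)]
    n = 20
    samples = [((i + 0.5) / n, (j + 0.5) / n)
               for i in range(n) for j in range(n)]

    # Cell "volume" estimated by counting samples per cell. Splitting the
    # sample list over tasks and summing per-task counts mirrors the
    # domain-decomposed parallel approach.
    counts = [0] * len(seeds)
    for p in samples:
        counts[nearest_seed(p, seeds)] += 1
    volumes = [c / len(samples) for c in counts]
    ```

    A real tessellation code computes exact cell polygons and neighbor lists instead; the parallel subtlety the abstract notes is that cells near a task's boundary need seed points from neighboring tasks to be computed consistently.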

  8. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  9. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  10. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  11. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts, relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  12. FLIC: A translator for same-source parallel implementation of regular grid applications

    SciTech Connect

    Michalakes, J.

    1997-02-01

    FLIC, a Fortran loop and index converter, is a parser-based source translation tool that automates the conversion of program loops and array indices for distributed-memory parallel computers. This conversion is important in the implementation of gridded models on distributed memory because it allows for decomposition and shrinking of model data structures. FLIC does not provide the parallel services itself, but rather provides an automated and transparent mapping of the source code to calls or directives of the user's choice of run-time systems or parallel libraries. The amount of user-supplied input required by FLIC to direct the conversion is small enough to fit as command line arguments for the tool. The tool requires no additional statements, comments, or directives in the source code, thus avoiding the pervasiveness and intrusiveness imposed by directives-based preprocessors and parallelizing compilers. FLIC is lightweight and suitable for use as a precompiler and facilitates a same-source approach to operability on diverse computer architectures. FLIC is targeted to new or existing applications that employ regular gridded domains, such as weather models, that will be parallelized by data-domain decomposition.
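    The loop-bounds conversion at the heart of such a tool can be illustrated with a small Python sketch of a block decomposition: a global index range is rewritten into a contiguous local slab per processor. This is a generic sketch of the transformation, not FLIC's actual output.

    ```python
    def local_bounds(global_start, global_end, n_tasks, task_id):
        # The kind of rewrite a loop converter automates: a global loop
        #     do i = global_start, global_end
        # becomes, on each of n_tasks processors, a loop over a contiguous
        # slab of the index range (block decomposition).
        n = global_end - global_start + 1
        base, extra = divmod(n, n_tasks)
        # The first `extra` tasks get one additional iteration each.
        start = global_start + task_id * base + min(task_id, extra)
        size = base + (1 if task_id < extra else 0)
        return start, start + size - 1

    # Every global index is covered exactly once across 4 tasks:
    covered = []
    for t in range(4):
        lo, hi = local_bounds(1, 10, 4, t)
        covered.extend(range(lo, hi + 1))
    ```

    The companion transformation is on array indices: the same mapping shrinks each task's arrays to its local slab (plus any halo cells), so global subscripts must be offset into local ones.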

  13. Automated Tools for Subject Matter Expert Evaluation of Automated Scoring

    ERIC Educational Resources Information Center

    Williamson, David M.; Bejar, Isaac I.; Sax, Anne

    2004-01-01

    As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…

  14. Automation U.S.A.: Overcoming Barriers to Automation.

    ERIC Educational Resources Information Center

    Brody, Herb

    1985-01-01

    Although labor unions and inadequate technology play minor roles, the principal barrier to factory automation is "fear of change." Related problems include long-term benefits, nontechnical executives, and uncertainty of factory cost accounting. Industry support for university programs is helping to educate engineers to design, implement, and…

  15. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
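    The underlying problem, mapping m pipeline modules contiguously onto n processors so the maximum per-processor load is minimized, can be sketched with a straightforward O(nm²) dynamic program in Python. This is the naive formulation, not the paper's improved O(nm log m) algorithm; the module weights are illustrative.

    ```python
    def min_bottleneck(weights, n_procs):
        # Contiguous mapping of pipeline modules onto processors:
        # minimize the maximum per-processor load. best[k][i] is the
        # optimal bottleneck for the first i modules on k processors.
        m = len(weights)
        prefix = [0]
        for w in weights:
            prefix.append(prefix[-1] + w)
        INF = float("inf")
        best = [[INF] * (m + 1) for _ in range(n_procs + 1)]
        best[0][0] = 0
        for k in range(1, n_procs + 1):
            for i in range(1, m + 1):
                for j in range(k - 1, i):  # last processor takes modules j..i-1
                    load = prefix[i] - prefix[j]
                    best[k][i] = min(best[k][i], max(best[k - 1][j], load))
        return best[n_procs][m]

    bottleneck = min_bottleneck([4, 2, 6, 3, 5], 3)
    ```

    For a pipeline, the bottleneck load determines throughput, which is why the objective is the maximum rather than the sum; the paper's speedups come from replacing the inner minimization with a smarter search.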

  16. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

  17. An automated nudged elastic band method.

    PubMed

    Kolsbjerg, Esben L; Groves, Michael N; Hammer, Bjørk

    2016-09-01

    A robust, efficient, dynamic, and automated nudged elastic band (AutoNEB) algorithm to effectively locate transition states is presented. The strength of the algorithm is its ability to use fewer resources than the nudged elastic band (NEB) method by focusing first on converging a rough path before improving upon the resolution around the transition state. To demonstrate its efficiency, it has been benchmarked using a simple diffusion problem and a dehydrogenation reaction. In both cases, the total number of force evaluations used by the AutoNEB method is significantly less than the NEB method. Furthermore, it is shown that for a fast and robust relaxation to the transition state, a climbing image elastic band method where the full spring force, rather than only the component parallel to the local tangent to the path, is preferred especially for pathways through energy landscapes with multiple local minima. The resulting corner cutting does not affect the accuracy of the transition state as long as this is located with the climbing image method. Finally, a number of pitfalls often encountered while locating the true transition state of a reaction are discussed in terms of systematically exploring the multidimensional energy landscape of a given process. PMID:27608989

  18. An automated nudged elastic band method

    NASA Astrophysics Data System (ADS)

    Kolsbjerg, Esben L.; Groves, Michael N.; Hammer, Bjørk

    2016-09-01

    A robust, efficient, dynamic, and automated nudged elastic band (AutoNEB) algorithm to effectively locate transition states is presented. The strength of the algorithm is its ability to use fewer resources than the nudged elastic band (NEB) method by focusing first on converging a rough path before improving upon the resolution around the transition state. To demonstrate its efficiency, it has been benchmarked using a simple diffusion problem and a dehydrogenation reaction. In both cases, the total number of force evaluations used by the AutoNEB method is significantly less than the NEB method. Furthermore, it is shown that for a fast and robust relaxation to the transition state, a climbing image elastic band method where the full spring force, rather than only the component parallel to the local tangent to the path, is preferred especially for pathways through energy landscapes with multiple local minima. The resulting corner cutting does not affect the accuracy of the transition state as long as this is located with the climbing image method. Finally, a number of pitfalls often encountered while locating the true transition state of a reaction are discussed in terms of systematically exploring the multidimensional energy landscape of a given process.
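    The projected forces described in these two records (spring force along the path, true force perpendicular to it, and an inverted parallel component for the climbing image) can be sketched for a toy two-dimensional double-well potential. The landscape, image count, and step sizes are illustrative assumptions; this is the standard climbing-image NEB update, not the AutoNEB resolution-refinement strategy.

    ```python
    def V(x, y):
        # Model landscape: minima at (-1, 0) and (1, 0); saddle at (0, 0), V = 1.
        return (x * x - 1.0) ** 2 + y * y

    def grad(x, y):
        return (4.0 * x * (x * x - 1.0), 2.0 * y)

    def neb(n_images=9, k=1.0, step=0.02, iters=2000, climb_after=1000):
        # Interior images are relaxed; the two endpoint minima stay fixed.
        xs = [-1.0 + 2.0 * i / (n_images - 1) for i in range(n_images)]
        path = [(x, 0.4 * (1.0 - x * x)) for x in xs]  # deliberately bowed start
        for it in range(iters):
            climbing = it >= climb_after
            top = max(range(1, n_images - 1), key=lambda i: V(*path[i]))
            new = list(path)
            for i in range(1, n_images - 1):
                tx = path[i + 1][0] - path[i - 1][0]
                ty = path[i + 1][1] - path[i - 1][1]
                tn = (tx * tx + ty * ty) ** 0.5
                tx, ty = tx / tn, ty / tn              # unit tangent
                gx, gy = grad(*path[i])
                gpar = gx * tx + gy * ty
                if climbing and i == top:
                    # Climbing image: reverse the force component along the
                    # path, driving this image uphill to the saddle point.
                    fx = -gx + 2.0 * gpar * tx
                    fy = -gy + 2.0 * gpar * ty
                else:
                    # Ordinary image: true force perpendicular to the path
                    # plus a spring force along it to keep images spread out.
                    d_fwd = ((path[i + 1][0] - path[i][0]) ** 2
                             + (path[i + 1][1] - path[i][1]) ** 2) ** 0.5
                    d_bwd = ((path[i][0] - path[i - 1][0]) ** 2
                             + (path[i][1] - path[i - 1][1]) ** 2) ** 0.5
                    sp = k * (d_fwd - d_bwd)
                    fx = -(gx - gpar * tx) + sp * tx
                    fy = -(gy - gpar * ty) + sp * ty
                new[i] = (path[i][0] + step * fx, path[i][1] + step * fy)
            path = new
        top = max(range(1, n_images - 1), key=lambda i: V(*path[i]))
        return path[top]

    ts = neb()
    saddle_energy = V(*ts)
    ```

    AutoNEB's economy comes from layering on top of updates like these: converge a coarse path loosely first, then add images and tighten convergence only near the highest image.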

  19. The array biosensor: portable, automated systems.

    PubMed

    Ligler, Frances S; Sapsford, Kim E; Golden, Joel P; Shriver-Lake, Lisa C; Taitt, Chris R; Dyer, Maureen A; Barone, Salvatore; Myatt, Christopher J

    2007-01-01

    With recent advances in surface chemistry, microfluidics, and data analysis, there are ever increasing reports of array-based methods for detecting and quantifying multiple targets. However, only a few systems have been described that require minimal preparation of complex samples and possess a means of quantitatively assessing matrix effects. The NRL Array Biosensor has been developed with the goal of rapid and sensitive detection of multiple targets from multiple samples analyzed simultaneously. A key characteristic of this system is its two-dimensional configuration, which allows controls and standards to be analyzed in parallel with unknowns. Although the majority of our work has focused on instrument automation and immunoassay development, we have recently initiated efforts to utilize alternative recognition molecules, such as peptides and sugars, for detection of a wider variety of targets. The array biosensor has demonstrated utility for a variety of applications, including food safety, disease diagnosis, monitoring immune response, and homeland security, and is presently being transitioned to the commercial sector for manufacturing.

  20. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processors. User programs and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quantums are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.