Science.gov

Sample records for automated parallel cultures

  1. Multiple microfermentor battery: a versatile tool for use with automated parallel cultures of microorganisms producing recombinant proteins and for optimization of cultivation protocols.

    PubMed

    Frachon, Emmanuel; Bondet, Vincent; Munier-Lehmann, Hélène; Bellalou, Jacques

    2006-08-01

    A multiple microfermentor battery was designed for high-throughput recombinant protein production in Escherichia coli. This novel system comprises eight aerated glass reactors with a working volume of 80 ml and a moving external optical sensor for measuring optical densities at 600 nm (OD600) ranging from 0.05 to 100 online. Each reactor can be fitted with miniature probes to monitor temperature, dissolved oxygen (DO), and pH. Independent temperature regulation for each vessel is obtained with heating/cooling Peltier devices. Data from pH, DO, and turbidity sensors are collected on a FieldPoint (National Instruments) I/O interface and are processed and recorded by a LabVIEW program on a personal computer, which enables feedback control of the culture parameters. A high-density medium formulation was designed, which enabled us to grow E. coli to OD600 up to 100 in batch cultures with oxygen-enriched aeration. Accordingly, the biomass and the amount of recombinant protein produced in a 70-ml culture were at least equivalent to the biomass and the amount of recombinant protein obtained in a Fernbach flask with 1 liter of conventional medium. Thus, the microfermentor battery appears to be well suited for automated parallel cultures and process optimization, such as that needed for structural genomics projects.

  2. Multiple Microfermentor Battery: a Versatile Tool for Use with Automated Parallel Cultures of Microorganisms Producing Recombinant Proteins and for Optimization of Cultivation Protocols

    PubMed Central

    Frachon, Emmanuel; Bondet, Vincent; Munier-Lehmann, Hélène; Bellalou, Jacques

    2006-01-01

    A multiple microfermentor battery was designed for high-throughput recombinant protein production in Escherichia coli. This novel system comprises eight aerated glass reactors with a working volume of 80 ml and a moving external optical sensor for measuring optical densities at 600 nm (OD600) ranging from 0.05 to 100 online. Each reactor can be fitted with miniature probes to monitor temperature, dissolved oxygen (DO), and pH. Independent temperature regulation for each vessel is obtained with heating/cooling Peltier devices. Data from pH, DO, and turbidity sensors are collected on a FieldPoint (National Instruments) I/O interface and are processed and recorded by a LabVIEW program on a personal computer, which enables feedback control of the culture parameters. A high-density medium formulation was designed, which enabled us to grow E. coli to OD600 up to 100 in batch cultures with oxygen-enriched aeration. Accordingly, the biomass and the amount of recombinant protein produced in a 70-ml culture were at least equivalent to the biomass and the amount of recombinant protein obtained in a Fernbach flask with 1 liter of conventional medium. Thus, the microfermentor battery appears to be well suited for automated parallel cultures and process optimization, such as that needed for structural genomics projects. PMID:16885269
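
    The supervisory feedback loop described in the two records above (sensor values gathered through an I/O interface and processed by a control program) can be sketched as a simple polling controller. A minimal illustration in Python, assuming hypothetical read_temperature/set_peltier drivers; the actual system used LabVIEW with FieldPoint hardware, not Python:

      import random, time

      SETPOINT_C = 37.0   # target culture temperature
      KP = 0.5            # proportional gain (hypothetical tuning)

      def read_temperature(reactor_id):
          # Stand-in for an I/O-interface read; here, a simulated noisy sensor.
          return 37.0 + random.uniform(-0.5, 0.5)

      def set_peltier(reactor_id, power):
          # Stand-in for the Peltier driver; power in [-1, 1] (cooling .. heating).
          print(f"reactor {reactor_id}: Peltier power {power:+.2f}")

      def control_step(reactor_ids):
          # One pass of the supervisory loop over all vessels.
          for r in reactor_ids:
              error = SETPOINT_C - read_temperature(r)
              set_peltier(r, max(-1.0, min(1.0, KP * error)))

      for _ in range(3):          # a few demonstration iterations
          control_step(range(8))  # eight independently regulated reactors
          time.sleep(0.1)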

  3. Automated Parallel Capillary Electrophoretic System

    DOEpatents

    Li, Qingbo; Kane, Thomas E.; Liu, Changsheng; Sonnenschein, Bernard; Sharer, Michael V.; Kernan, John R.

    2000-02-22

    An automated electrophoretic system is disclosed. The system employs a capillary cartridge having a plurality of capillary tubes. The cartridge has a first array of capillary ends projecting from one side of a plate. The first array of capillary ends is spaced apart in substantially the same manner as the wells of a microtitre tray of standard size. This allows one to simultaneously perform capillary electrophoresis on samples present in each of the wells of the tray. The system includes a stacked, dual carousel arrangement to eliminate cross-contamination resulting from reuse of the same buffer tray on consecutive executions of electrophoresis. The system also has a gel delivery module containing either a gel syringe driven by a stepper motor or a high-pressure chamber with a pump to quickly and uniformly deliver gel through the capillary tubes. The system further includes a multi-wavelength beam generator that produces a laser beam with a wide range of wavelengths. An off-line capillary reconditioner thoroughly cleans a capillary cartridge to enable simultaneous execution of electrophoresis with another capillary cartridge. The streamlined nature of the off-line capillary reconditioner offers the advantage of increased system throughput with a minimal increase in system cost.

  4. Automation, parallelism, and robotics for proteomics.

    PubMed

    Alterovitz, Gil; Liu, Jonathan; Chow, Jijun; Ramoni, Marco F

    2006-07-01

    The speed of the human genome project (Lander, E. S., Linton, L. M., Birren, B., Nusbaum, C. et al., Nature 2001, 409, 860-921) was made possible, in part, by developments in automation of sequencing technologies. Before these technologies, sequencing was a laborious, expensive, and personnel-intensive task. Similarly, automation and robotics are changing the field of proteomics today. Proteomics is defined as the effort to understand and characterize proteins in the categories of structure, function and interaction (Englbrecht, C. C., Facius, A., Comb. Chem. High Throughput Screen. 2005, 8, 705-715). As such, this field nicely lends itself to automation technologies since these methods often require large economies of scale in order to achieve cost and time-saving benefits. This article describes some of the technologies and methods being applied in proteomics in order to facilitate automation within the field as well as in linking proteomics-based information with other related research areas.

  5. At the intersection of automation and culture

    NASA Technical Reports Server (NTRS)

    Sherman, P. J.; Wiener, E. L.

    1995-01-01

    The crash of an automated passenger jet at Nagoya, Japan, in 1994 is used as an example of crew error in using automatic systems. Automation provides pilots with the ability to perform tasks in various ways. National culture is cited as a factor that affects how a pilot and crew interact with each other and equipment.

  6. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast-growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, taking full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will help integrate high-performance computing with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  7. Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Kwang, Abel

    1994-01-01

    This paper develops a parallelizable multilevel multiple constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential as well as partially and fully parallel environments can be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort due both to updating and inversion.

  8. Automated Performance Prediction of Message Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Mehra, Pankaj; Sarukkai, Sekhar; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    As the trend toward massively parallel processing continues, the need for tools that can predict scalability trends becomes greater. While high-level languages like HPF have come into greater use, explicit message-passing programs proliferate, and will probably do so for some time, thanks to the onslaught of standards such as MPI. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require a substantial manual effort to represent an application in the model's format. The YAPP ("Yet Another Performance Predictor") tool is an attempt to automate the formation of first-order expressions for completion time, with a minimum of programmer assistance. The content of this paper is as follows: First, we explore the implementation details of YAPP, and illustrate with examples some of the reasons that automatic prediction is difficult. In the following sections, we present the results of four applications, using execution traces on the Intel i860, analyze the error in YAPP's predictions, explain the limitations of our implementation, and mention possible future additions. In particular, we illustrate techniques used to identify pipeline communication patterns, and demonstrate how compiler analysis and regression are combined to automate the prediction process.
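
    The combination of compiler analysis and regression mentioned above can be illustrated by fitting a first-order completion-time expression to measured runtimes. A minimal sketch using NumPy least squares; the model form T(p) = a + b/p + c*log2(p) and the sample timings are invented for illustration and are not taken from YAPP:

      import numpy as np

      # measured completion times (s) at several processor counts (hypothetical)
      p = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
      t = np.array([100.0, 52.0, 28.0, 16.5, 11.0])

      # first-order model: T(p) = a + b/p + c*log2(p)
      # (b/p ~ partitioned computation, c*log2(p) ~ tree-like communication)
      A = np.column_stack([np.ones_like(p), 1.0 / p, np.log2(p)])
      (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)

      print(f"T(p) ~ {a:.2f} + {b:.2f}/p + {c:.2f}*log2(p)")
      print("extrapolated T(32):", round(a + b / 32 + c * np.log2(32), 2))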

  9. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.

  10. Can routine automated urinalysis reduce culture requests?

    PubMed

    Kayalp, Damla; Dogan, Kubra; Ceylan, Gozde; Senes, Mehmet; Yucel, Dogan

    2013-09-01

    There are a substantial number of unnecessary urine culture requests. We aimed to investigate whether urine dipstick and microscopy results could accurately rule out urinary tract infection (UTI) without urine culture. The study included a total of 32,998 patients (11,928 men and 21,070 women, mean age: 39 ± 32 years) with a preliminary diagnosis of UTI for whom both urinalysis and urine culture were requested. All urine cultures were retrospectively reviewed; the association of culture positivity with a positive urinalysis result for leukocyte esterase (LE) and nitrite in chemical analysis and for pyuria (WBC) and bacteriuria in microscopy was determined. The diagnostic performance of urinalysis parameters for detection of UTI was evaluated. In total, 758 (2.3%) patients were positive by urine culture. Among these culture-positive samples, the proportions of positive dipstick results for LE and nitrite were 71.0% (n=538) and 17.7% (n=134), respectively. The positive microscopy results for WBC and bacteria were 68.2% (n=517) and 78.8% (n=597), respectively. Negative predictive values for LE, nitrite, bacteriuria and WBC were very close to 100%. Most of the samples had no or insignificant bacterial growth. Urine dipstick and microscopy can accurately rule out UTI. Automated urinalysis is a practicable and faster screening test which may prevent unnecessary culture requests for the majority of patients.
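
    The reflex-culture rule implied by these results (culture only when at least one urinalysis finding is positive, since all four negative predictive values approach 100%) reduces to a one-line predicate. A sketch; the parameter names and the any-positive policy are illustrative assumptions rather than the study's exact protocol:

      def needs_culture(leukocyte_esterase, nitrite, pyuria, bacteriuria):
          """Request a urine culture only if any urinalysis finding is positive.

          With negative predictive values near 100% for all four parameters,
          a fully negative urinalysis is taken to rule out UTI without culture.
          """
          return leukocyte_esterase or nitrite or pyuria or bacteriuria

      print(needs_culture(False, False, False, False))  # False: culture avoided
      print(needs_culture(True, False, False, False))   # True: send to culture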

  11. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high-quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency ...
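
    The master-slave scheme sketched above (slave nodes compute fitness only; the master applies selection, recombination, and mutation) can be illustrated with a process pool standing in for the slave nodes. Everything here, including the toy bit-counting fitness, is a hypothetical sketch rather than the authors' circuit-design code:

      import random
      from multiprocessing import Pool

      GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 20

      def fitness(genome):
          # Slave-side work: score one candidate (toy problem: count the 1s).
          return sum(genome)

      def breed(parents):
          # Master-side genetic operators: one-point crossover, then mutation.
          a, b = random.sample(parents, 2)
          cut = random.randrange(GENOME_LEN)
          child = a[:cut] + b[cut:]
          child[random.randrange(GENOME_LEN)] ^= 1
          return child

      if __name__ == "__main__":
          pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                 for _ in range(POP_SIZE)]
          with Pool() as workers:                       # the "slave nodes"
              for _ in range(GENERATIONS):
                  scores = workers.map(fitness, pop)    # farm out evaluations
                  ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
                  parents = ranked[:POP_SIZE // 4]      # truncation selection
                  pop = parents + [breed(parents)
                                   for _ in range(POP_SIZE - len(parents))]
          print("best fitness:", max(map(fitness, pop)))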

  12. Combinatorial parallel synthesis and automated screening of a novel class of liquid crystalline materials.

    PubMed

    Deeg, Oliver; Kirsch, Peer; Pauluth, Detlef; Bäuerle, Peter

    2002-12-07

    Combinatorial parallel synthesis has led to the rapid generation of a single-compound library of novel fluorinated quaterphenyls. Subsequent automated screening revealed liquid crystalline (LC) behaviour and gave qualitative relationships between molecular structures and solid-state properties.

  13. Automating the selection of standard parallels for conic map projections

    NASA Astrophysics Data System (ADS)

    Šavrič, Bojan; Jenny, Bernhard

    2016-05-01

    Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and with computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
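
    The shape of such a model can be conveyed with a toy stand-in: the classic "one-sixth rule" for placing the parallels, plus a small polynomial correction in the width-to-height ratio. The coefficients below are invented for illustration; the published polynomials should be taken from the article itself:

      def standard_parallels(lat_min_deg, lat_max_deg, width_to_height):
          """Toy placement of the two standard parallels of a conic projection.

          Baseline: the one-sixth rule of thumb places each parallel 1/6 of
          the latitude range inside the map edge.  The polynomial term in the
          width-to-height ratio is hypothetical and only mimics the form of
          the published model.
          """
          span = lat_max_deg - lat_min_deg
          k = 1.0 / 6.0 + 0.02 * width_to_height - 0.004 * width_to_height ** 2
          return lat_min_deg + k * span, lat_max_deg - k * span

      # e.g. a map of the contiguous US: ~24-49 deg N with a 2:1 aspect ratio
      print(standard_parallels(24.0, 49.0, 2.0))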

  14. Automated Scalability Analysis Tools for Message Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekhar R.; Mehra, Pankaj; Tucker, Deanne (Technical Monitor)

    1994-01-01

    In order to develop scalable parallel applications, a number of programming decisions have to be made during the development of the program. Performance tools that help in making these decisions are few, if any exist. Traditionally, performance tools have focused on exposing performance bottlenecks of small-scale executions of the program. However, it is common knowledge that programs that perform exceptionally well on small processor configurations, more often than not, perform poorly when executed on larger processor configurations. Hence, new tools that predict the execution characteristics of scaled-up programs are an essential part of an application developer's toolkit. In this paper we discuss important issues that need to be considered in order to build useful scalability analysis tools for parallel programs. We introduce a simple tool that automatically extracts scalability characteristics of a class of deterministic parallel programs. We show, with the help of a number of results on the Intel iPSC/860, that predictions are within reasonable bounds.

  15. Automating the parallel processing of fluid and structural dynamics calculations

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  16. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  17. Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments

    SciTech Connect

    Alireza Haghighat

    2003-05-07

    This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.

  1. Automated CFD Parameter Studies on Distributed Parallel Computers

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Aftosmis, Michael; Pandya, Shishir; Tejnil, Edward; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The objective of the current work is to build a prototype software system that automates the process of running CFD jobs on Information Power Grid (IPG) resources. This system should remove the need for user monitoring and intervention of every single CFD job. It should enable the use of many different computers to populate a massive run matrix in the shortest time possible. Such a software system has been developed and is known as the AeroDB script system. The approach taken for the development of AeroDB was to build several discrete modules. These include a database, a job-launcher module, a run-manager module to monitor each individual job, and a web-based user portal for monitoring the progress of the parameter study. The paper presents the details of the design of AeroDB, followed by the results of a parameter study performed using AeroDB for the analysis of a reusable launch vehicle (RLV). The paper concludes with a section on the lessons learned in this effort and ideas for future work in this area.

  2. An automated parallel simulation execution and analysis approach

    NASA Astrophysics Data System (ADS)

    Dallaire, Joel D.; Green, David M.; Reaper, Jerome H.

    2004-08-01

    State-of-the-art simulation computing requirements are continually approaching and then exceeding the performance capabilities of existing computers. This trend remains true even with huge yearly gains in processing power and general computing capabilities; simulation scope and fidelity often increase as well. Accordingly, simulation studies often expend days or weeks executing a single test case. Compounding the problem, stochastic models often require execution of each test case with multiple random number seeds to provide valid results. Many techniques have been developed to improve the performance of simulations without sacrificing model fidelity: optimistic simulation, distributed simulation, parallel multi-processing, and the use of supercomputers such as Beowulf clusters. An approach and prototype toolset have been developed that augment existing optimization techniques to improve multiple-execution timelines. This approach, similar in concept to the SETI@home experiment, makes maximum use of unused licenses and computers, which can be geographically distributed. Using a publish/subscribe architecture, simulation executions are dispatched to distributed machines for execution. Simulation results are then processed, collated, and transferred to a single site for analysis.
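
    The multiple-seed fan-out described above maps naturally onto a work queue: every (test case, seed) pair is an independent job. A single-machine sketch in which a process pool stands in for the distributed, publish/subscribe-fed machines; the toy simulation function is hypothetical:

      import random
      from itertools import product
      from multiprocessing import Pool
      from statistics import mean

      def run_simulation(job):
          # Stand-in for one simulation execution of (test case, RNG seed).
          case, seed = job
          rng = random.Random(seed)
          return case, mean(rng.random() for _ in range(10_000))

      if __name__ == "__main__":
          cases = ["baseline", "excursion"]
          seeds = range(8)                        # replicates per test case
          jobs = list(product(cases, seeds))
          with Pool() as pool:                    # "subscribers" pulling work
              results = pool.map(run_simulation, jobs)
          for case in cases:
              vals = [v for c, v in results if c == case]
              print(case, "mean over seeds:", round(mean(vals), 4))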

  3. Automated method for fabrication of parallel multifiber cable assemblies with integral connector components

    NASA Astrophysics Data System (ADS)

    Lee, Nicholas A.; Igl, Scott A.; DeBaun, Barbara A.; Henson, Gordon D.; Smith, Terry L.

    1997-04-01

    The unrelenting demand for ever-higher data transfer rates between computing devices, coupled with the emerging ability to produce robust, monolithic arrays of optical sources and detectors has fueled the development of high-speed parallel optical data links, and created a need for connectorized, parallel, multifiber cable assemblies. An innovative approach to the cable assembly manufacturing process has been developed which incorporates the connector installation process into the cable fabrication process, thus enabling the production of connectorized cable assemblies in a continuous, automated manner. This cable assembly fabrication process, as well as critical details surrounding the process, will be discussed.

  4. Flexible automation of cell culture and tissue engineering tasks.

    PubMed

    Knoll, Alois; Scherer, Torsten; Poggendorf, Iris; Lütkemeyer, Dirk; Lehmann, Jürgen

    2004-01-01

    Until now, the predominant use cases of industrial robots have been routine handling tasks in the automotive industry. In biotechnology and tissue engineering, in contrast, only very few tasks have been automated with robots. New developments in robot platform and robot sensor technology, however, make it possible to automate plants that largely depend on human interaction with the production process, e.g., for material and cell culture fluid handling, transportation, operation of equipment, and maintenance. In this paper we present a robot system that lends itself to automating routine tasks in biotechnology but also has the potential to automate other production facilities that are similar in process structure. After motivating the design goals, we describe the system and its operation, illustrate sample runs, and give an assessment of the advantages. We conclude this paper by giving an outlook on possible further developments.

  5. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, typically done interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated, and the individual time points can be processed independently, which lends itself to trivial parallelization on a high-performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multi-illumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
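
    The property the workflow exploits, namely that each time point can be processed independently, is easy to sketch. Below, a local process pool parallelizes a per-timepoint function; in the published pipeline this role is played by snakemake rules dispatched to cluster nodes, and the processing function here is a hypothetical placeholder:

      from multiprocessing import Pool

      def process_timepoint(t):
          # Placeholder for one registration/fusion step on time point t
          # (in the real pipeline, a Fiji multiview-reconstruction job
          # launched by a snakemake rule).
          return t, f"timepoint_{t:04d} processed"

      if __name__ == "__main__":
          timepoints = range(240)            # e.g. a 240-frame time lapse
          with Pool() as pool:               # one worker per core
              for t, status in pool.imap_unordered(process_timepoint, timepoints):
                  pass                       # collect or log per-timepoint results
          print("all time points processed independently")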

  6. The Protein Maker: an automated system for high-throughput parallel purification

    PubMed Central

    Smith, Eric R.; Begley, Darren W.; Anderson, Vanessa; Raymond, Amy C.; Haffner, Taryn E.; Robinson, John I.; Edwards, Thomas E.; Duncan, Natalie; Gerdts, Cory J.; Mixon, Mark B.; Nollert, Peter; Staker, Bart L.; Stewart, Lance J.

    2011-01-01

    The Protein Maker is an automated purification system developed by Emerald BioSystems for high-throughput parallel purification of proteins and antibodies. This instrument allows multiple load, wash and elution buffers to be used in parallel along independent lines for up to 24 individual samples. To demonstrate its utility, its use in the purification of five recombinant PB2 C-terminal domains from various subtypes of the influenza A virus is described. Three of these constructs crystallized and one diffracted X-rays to sufficient resolution for structure determination and deposition in the Protein Data Bank. Methods for screening lysis buffers for a cytochrome P450 from a pathogenic fungus prior to upscaling expression and purification are also described. The Protein Maker has become a valuable asset within the Seattle Structural Genomics Center for Infectious Disease (SSGCID) and hence is a potentially valuable tool for a variety of high-throughput protein-purification applications. PMID:21904043

  7. Comparison of manual and automated cultures of bone marrow stromal cells for bone tissue engineering.

    PubMed

    Akiyama, Hirokazu; Kobayashi, Asako; Ichimura, Masaki; Tone, Hiroshi; Nakatani, Masaru; Inoue, Minoru; Tojo, Arinobu; Kagami, Hideaki

    2015-11-01

    The development of an automated cell culture system would allow stable and economical cell processing for wider clinical applications in the field of regenerative medicine. However, it is crucial to determine whether the cells obtained by automated culture are comparable to those generated by manual culture. In the present study, we focused on the primary culture process of bone marrow stromal cells (BMSCs) for bone tissue engineering and investigated the feasibility of its automation using a commercially available automated cell culture system in a clinical setting. A comparison of the harvested BMSCs from manual and automated cultures using clinically acceptable protocols showed no differences in cell yields, viabilities, surface marker expression profiles, and in vivo osteogenic abilities. Cells cultured with this system also did not show malignant transformation and the automated process was revealed to be safe in terms of microbial contamination. Taken together, the automated procedure described in this report provides an approach to clinical bone tissue engineering.

  8. Automated harvesting and 2-step purification of unclarified mammalian cell-culture broths containing antibodies.

    PubMed

    Holenstein, Fabian; Eriksson, Christer; Erlandsson, Ioana; Norrman, Nils; Simon, Jill; Danielsson, Åke; Milicov, Adriana; Schindler, Patrick; Schlaeppi, Jean-Marc

    2015-10-30

    Therapeutic monoclonal antibodies represent one of the fastest growing segments in the pharmaceutical market. The growth of the segment has necessitated development of new efficient and cost-saving platforms for the preparation and analysis of early candidates for faster and better antibody selection and characterization. We report on a new integrated platform for automated harvesting of whole unclarified cell-culture broths, followed by in-line tandem affinity capture, pH neutralization and size-exclusion chromatography of recombinant antibodies expressed transiently in mammalian human embryonic kidney 293T cells at the 1-L scale. The system consists of two bench-top chromatography instruments connected to a central unit with eight disposable filtration devices used for loading and filtering the cell cultures. The staggered parallel multi-step configuration of the system allows unattended processing of eight samples in less than 24 h. The system was validated with a random panel of 45 whole-cell culture broths containing recombinant antibodies in the early profiling phase. The results showed that the overall performance of the preparative automated system was higher compared to the conventional downstream process including manual harvesting and purification. The mean recovery of purified material from the culture broth was 66.7%, representing a 20% increase compared to that of the manual process. Moreover, the automated process reduced by 3-fold the amount of residual aggregates in the purified antibody fractions, indicating that the automated system allows the cost-efficient and timely preparation of antibodies in the 20-200 mg range, and covers the requirements for early in vitro and in vivo profiling and formulation of these drug candidates.

  9. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less ...
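
    The effect of the Normalize-Transpose law, scalar operations lifting automatically over sequences with no loops or parallel annotations, can be imitated in Python. This illustrates the semantics only and is not SequenceL itself; NumPy broadcasting plays the role of NT, and any resulting data parallelism stays implicit:

      import numpy as np

      # In SequenceL one writes f for scalars and NT applies it elementwise
      # when the arguments are sequences.  Broadcasting mimics that behavior.
      def f(x, y):
          return x * y + 1

      print(f(3, 4))                                       # scalar: 13
      print(f(np.array([1, 2, 3]), 10))                    # lifted over x
      print(f(np.array([1, 2, 3]), np.array([4, 5, 6])))   # lifted over both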

  10. Anthropology and cultural neuroscience: creating productive intersections in parallel fields.

    PubMed

    Brown, R A; Seligman, R

    2009-01-01

    Partly due to the failure of anthropology to productively engage the fields of psychology and neuroscience, investigations in cultural neuroscience have occurred largely without the active involvement of anthropologists or anthropological theory. Dramatic advances in the tools and findings of social neuroscience have emerged in parallel with significant advances in anthropology that connect social and political-economic processes with fine-grained descriptions of individual experience and behavior. We describe four domains of inquiry that follow from these recent developments, and provide suggestions for intersections between anthropological tools - such as social theory, ethnography, and quantitative modeling of cultural models - and cultural neuroscience. These domains are: the sociocultural construction of emotion, status and dominance, the embodiment of social information, and the dual social and biological nature of ritual. Anthropology can help locate unique or interesting populations and phenomena for cultural neuroscience research. Anthropological tools can also help "drill down" to investigate key socialization processes accountable for cross-group differences. Furthermore, anthropological research points at meaningful underlying complexity in assumed relationships between social forces and biological outcomes. Finally, ethnographic knowledge of cultural content can aid with the development of ecologically relevant stimuli for use in experimental protocols.

  11. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    PubMed Central

    Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-01-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL−1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253

  12. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA.

    PubMed

    Yu, Zeta Tak For; Guan, Huijiao; Cheung, Mei Ki; McHugh, Walker M; Cornell, Timothy T; Shanley, Thomas P; Kurabayashi, Katsuo; Fu, Jianping

    2015-06-15

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology ('AlphaLISA') in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL(-1). The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications.

  13. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    NASA Astrophysics Data System (ADS)

    Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-06-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL-1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications.

  14. Effects of ATC automation on precision approaches to closely spaced parallel runways

    NASA Technical Reports Server (NTRS)

    Slattery, R.; Lee, K.; Sanford, B.

    1995-01-01

    Improved navigational technology (such as the Microwave Landing System and the Global Positioning System) installed in modern aircraft will enable air traffic controllers to better utilize available airspace. Consequently, arrival traffic can fly approaches to parallel runways separated by smaller distances than are currently allowed. Previous simulation studies of advanced navigation approaches have found that controller workload is increased when there is a combination of aircraft that are capable of following advanced navigation routes and aircraft that are not. Research into Air Traffic Control automation at Ames Research Center has led to the development of the Center-TRACON Automation System (CTAS). The Final Approach Spacing Tool (FAST) is the component of the CTAS used in the TRACON area. The work in this paper examines, via simulation, the effects of FAST used for aircraft landing on closely spaced parallel runways. The simulation contained various combinations of aircraft, equipped and unequipped with advanced navigation systems. A set of simulations was run both manually and with an augmented set of FAST advisories to sequence aircraft, assign runways, and avoid conflicts. The results of the simulations are analyzed, measuring the airport throughput, aircraft delay, loss of separation, and controller workload.

  15. Optoelectronic parallel processing with smart pixel arrays for automated screening of cervical smear imagery

    NASA Astrophysics Data System (ADS)

    Metz, John Langdon

    2000-10-01

    This thesis investigates the use of optoelectronic parallel processing systems with smart photosensor arrays (SPAs) to examine cervical smear images. The automation of cervical smear screening seeks to reduce human workload and improve the accuracy of detecting pre- cancerous and cancerous conditions. Increasing the parallelism of image processing improves the speed and accuracy of locating regions-of-interest (ROI) from images of the cervical smear for the first stage of a two-stage screening system. The two-stage approach first detects ROI optoelectronically before classifying them using more time consuming electronic algorithms. The optoelectronic hit/miss transform (HMT) is computed using gray scale modulation spatial light modulators in an optical correlator. To further the parallelism of this system, a novel CMOS SPA computes the post processing steps required by the HMT algorithm. The SPA reduces the subsequent bandwidth passed into the second, electronic image processing stage classifying the detected ROI. Limitations in the miss operation of the HMT suggest using only the hit operation for detecting ROI. This makes possible a single SPA chip approach using only the hit operation for ROI detection which may replace the optoelectronic correlator in the screening system. Both the HMT SPA postprocessor and the SPA ROI detector design provide compact, efficient, and low-cost optoelectronic solutions to performing ROI detection on cervical smears. Analysis of optoelectronic ROI detection with electronic ROI classification shows these systems have the potential to perform at, or above, the current error rates for manual classification of cervical smears.

  16. Final report for "Automated Diagnosis of Large Scale Parallel Applications"

    SciTech Connect

    Karavanic, K L

    2000-11-17

    The work performed is part of a continuing research project, PPerfDB, headed by Dr. Karavanic. We are studying the application of experiment management techniques to the problems associated with gathering, storing, and using performance data with the goal of achieving completely automated diagnosis of application and system bottlenecks. This summer we focused on incorporating heterogeneous data from a variety of tools, applications, and platforms, and on designing novel techniques for automated performance diagnosis. The Experiment Management paradigm is a useful approach for designing a tool that will automatically diagnose performance problems in large-scale parallel applications. The ability to gather, store, and use performance data gathered over time from different executions and using different collection tools enables more sophisticated approaches to performance diagnosis and to performance evaluation more generally. We look forward to continuing our efforts by further development and analysis of online diagnosis using historical data, and by investigating performance data and diagnosis gathered from mixed MPI/OpenMP applications.

  17. Digital microfluidics for automated hanging drop cell spheroid culture.

    PubMed

    Aijian, Andrew P; Garrell, Robin L

    2015-06-01

    Cell spheroids are multicellular aggregates, grown in vitro, that mimic the three-dimensional morphology of physiological tissues. Although there are numerous benefits to using spheroids in cell-based assays, the adoption of spheroids in routine biomedical research has been limited, in part, by the tedious workflow associated with spheroid formation and analysis. Here we describe a digital microfluidic platform that has been developed to automate liquid-handling protocols for the formation, maintenance, and analysis of multicellular spheroids in hanging drop culture. We show that droplets of liquid can be added to and extracted from through-holes, or "wells," and fabricated in the bottom plate of a digital microfluidic device, enabling the formation and assaying of hanging drops. Using this digital microfluidic platform, spheroids of mouse mesenchymal stem cells were formed and maintained in situ for 72 h, exhibiting good viability (>90%) and size uniformity (% coefficient of variation <10% intraexperiment, <20% interexperiment). A proof-of-principle drug screen was performed on human colorectal adenocarcinoma spheroids to demonstrate the ability to recapitulate physiologically relevant phenomena such as insulin-induced drug resistance. With automatable and flexible liquid handling, and a wide range of in situ sample preparation and analysis capabilities, the digital microfluidic platform provides a viable tool for automating cell spheroid culture and analysis.

  18. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Ng, K.T.; Nadeem, A.

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.
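
    The parallel direct search step can be sketched as a pattern search whose stencil of trial points is evaluated concurrently. A toy quadratic stands in for the expensive three-dimensional finite-element defibrillation model; the move/shrink logic is the generic compass-search variant of PDS, not the authors' exact algorithm:

      from multiprocessing import Pool

      def objective(x):
          # Stand-in for one finite-element evaluation (minutes of CPU in reality).
          return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

      def stencil(center, step):
          # Compass stencil: +/- step along each coordinate axis.
          pts = []
          for i in range(len(center)):
              for s in (step, -step):
                  p = list(center)
                  p[i] += s
                  pts.append(tuple(p))
          return pts

      if __name__ == "__main__":
          x, step = (0.0, 0.0), 1.0
          fx = objective(x)
          with Pool() as pool:
              while step > 1e-3:
                  trials = stencil(x, step)
                  scores = pool.map(objective, trials)   # parallel evaluations
                  best_score, best_x = min(zip(scores, trials))
                  if best_score < fx:
                      x, fx = best_x, best_score         # move to the better point
                  else:
                      step /= 2                          # shrink the pattern
          print("optimum ~", x, "objective", round(fx, 6))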

  19. Functionalized Polymers-Emerging Versatile Tools for Solution-Phase Chemistry and Automated Parallel Synthesis.

    PubMed

    Kirschning, Andreas; Monenschein, Holger; Wittenberg, Rüdiger

    2001-02-16

    As part of the dramatic changes associated with the need for preparing compound libraries in pharmaceutical and agrochemical research laboratories, industry searches for new technologies that allow for the automation of synthetic processes. Since the pioneering work by Merrifield, polymeric supports have been identified as playing a key role in this field; however, polymer-assisted solution-phase synthesis, which utilizes immobilized reagents and catalysts, has only recently begun to flourish. Polymer-assisted solution-phase synthesis has various advantages over conventional solution-phase chemistry, such as the ease of separation of the supported species from a reaction mixture by filtration and washing, the opportunity to use an excess of the reagent to force the reaction to completion without causing workup problems, and the adaptability to continuous-flow processes. Various strategies for employing functionalized polymers stoichiometrically have been developed. Apart from reagents that are covalently or ionically attached to the polymeric backbone and which are released into solution in the presence of a suitable substrate, scavenger reagents play an increasingly important role in purifying reaction mixtures. Employing functionalized polymers in solution-phase synthesis has been shown to be extremely useful in automated parallel synthesis and multistep sequences. So far, compound libraries containing as many as 88 members have been generated by using several polymer-bound reagents one after another. Furthermore, it has been demonstrated that complex natural products like the alkaloids (+/-)-oxomaritidine and (+/-)-epimaritidine can be prepared by a sequence of five and six consecutive polymer-assisted steps, respectively, and the potent analgesic compound (+/-)-epibatidine in twelve linear steps, ten of which are based on functionalized polymers. These developments reveal the great future prospects of polymer-assisted solution-phase synthesis.

  20. Automated integration of genomic physical mapping data via parallel simulated annealing

    SciTech Connect

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: fluorescence in situ hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
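
    The rearrangement problem, ordering the N clone "columns" so that gaps in the M object "rows" are minimized, can be sketched with a small serial simulated-annealing loop. The toy incidence data and swap move are illustrative; the production system ran this in parallel over 40+ machines and additionally enforced the FISH ordering constraints omitted here:

      import math, random

      def gap_count(order, rows):
          # For each row, count absent positions between its first and last clone.
          total = 0
          for row in rows:
              hits = [i for i, clone in enumerate(order) if clone in row]
              total += sum(hits[k + 1] - hits[k] - 1 for k in range(len(hits) - 1))
          return total

      def anneal(clones, rows, steps=20000, t0=5.0):
          order, cost = clones[:], gap_count(clones, rows)
          for step in range(steps):
              t = t0 * (1 - step / steps) + 1e-9          # linear cooling
              i, j = random.sample(range(len(order)), 2)
              order[i], order[j] = order[j], order[i]     # propose a swap
              new = gap_count(order, rows)
              if new <= cost or random.random() < math.exp((cost - new) / t):
                  cost = new                              # accept the move
              else:
                  order[i], order[j] = order[j], order[i] # undo the swap
          return order, cost

      clones = list(range(12))                            # toy cosmid ids
      rows = [{0, 1, 2, 3}, {3, 4, 5}, {5, 6, 7, 8}, {8, 9, 10, 11}]
      print(anneal(clones, rows))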

  1. Parallelization and High-Performance Computing Enables Automated Statistical Inference of Multi-scale Models.

    PubMed

    Jagiella, Nick; Rickert, Dennis; Theis, Fabian J; Hasenauer, Jan

    2017-02-22

    Mechanistic understanding of multi-scale biological processes, such as cell proliferation in a changing biological tissue, is readily facilitated by computational models. While tools exist to construct and simulate multi-scale models, the statistical inference of the unknown model parameters remains an open problem. Here, we present and benchmark a parallel approximate Bayesian computation sequential Monte Carlo (pABC SMC) algorithm, tailored for high-performance computing clusters. pABC SMC is fully automated and returns reliable parameter estimates and confidence intervals. By running the pABC SMC algorithm for ∼10⁶ hr, we parameterize multi-scale models that accurately describe quantitative growth curves and histological data obtained in vivo from individual tumor spheroid growth in media droplets. The models capture the hybrid deterministic-stochastic behaviors of 10⁵-10⁶ cells growing in a 3D dynamically changing nutrient environment. The pABC SMC algorithm reliably converges to a consistent set of parameters. Our study demonstrates a proof of principle for robust, data-driven modeling of multi-scale biological systems and the feasibility of multi-scale model parameterization through statistical inference.
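
    A minimal sketch of what makes ABC methods parallelizable: within one generation, each accepted particle can be produced independently, so particles map cleanly onto worker processes. The toy model, prior, distance, and tolerance below are assumptions standing in for the paper's multi-scale simulator; the full pABC SMC machinery (tolerance schedules, perturbation kernels, weights) is omitted.

```python
import numpy as np
from multiprocessing import Pool

# Hypothetical stand-ins for the paper's simulator and distance function.
def simulate(theta, rng):
    return rng.normal(theta, 1.0, size=50).mean()   # toy "model output"

def distance(summary, observed):
    return abs(summary - observed)

OBSERVED = 2.0

def one_particle(args):
    seed, epsilon = args
    rng = np.random.default_rng(seed)
    while True:                       # rejection sampling: retry until accepted
        theta = rng.uniform(-10, 10)  # draw from the (toy) prior
        if distance(simulate(theta, rng), OBSERVED) < epsilon:
            return theta

if __name__ == "__main__":
    n_particles, epsilon = 200, 0.2
    # Particles are independent, hence embarrassingly parallel across workers.
    with Pool() as pool:
        particles = pool.map(one_particle,
                             [(seed, epsilon) for seed in range(n_particles)])
    print(np.mean(particles), np.std(particles))
```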

  2. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    PubMed Central

    Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated microscope stage. Images are captured at multiple spots on the device during the operations for monitoring samples in the microchambers in parallel; yet the device position may vary at different time points throughout the operations as the device moves back and forth on the motorized stage. Here, we report an image-based positioning strategy to realign the chamber position before each microscopic image is recorded. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
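
    The realignment step described above amounts to locating a fiducial mark and commanding the stage to cancel the measured offset. Below is a hedged sketch of that logic using OpenCV template matching; the mark template, preset coordinates, confidence threshold, pixel calibration, and the `stage.move_relative` controller call are all hypothetical stand-ins for the authors' implementation.

```python
import cv2

def find_mark(frame, mark_template):
    """Locate the alignment mark in the current frame; return its (x, y)."""
    res = cv2.matchTemplate(frame, mark_template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(res)
    if score < 0.6:                      # hypothetical confidence threshold
        raise RuntimeError("alignment mark not found")
    return x, y

PRESET = (120, 80)   # mark's preset pixel position, recorded at setup (hypothetical)

def realign(frame, template, stage, um_per_px=0.65):
    """Move the motorized stage so the mark returns to its preset position.
    `stage.move_relative` is a hypothetical stage-controller call."""
    x, y = find_mark(frame, template)
    dx, dy = PRESET[0] - x, PRESET[1] - y
    stage.move_relative(dx * um_per_px, dy * um_per_px)  # cancel the offset
```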

  3. An Extended Case Study Methodology for Investigating Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    NASA Technical Reports Server (NTRS)

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  4. Long-term maintenance of human induced pluripotent stem cells by automated cell culture system

    PubMed Central

    Konagaya, Shuhei; Ando, Takeshi; Yamauchi, Toshiaki; Suemori, Hirofumi; Iwata, Hiroo

    2015-01-01

    Pluripotent stem cells, such as embryonic stem cells and induced pluripotent stem (iPS) cells, are regarded as new sources for cell replacement therapy. These cells can expand indefinitely in an undifferentiated state and can be differentiated into multiple cell types. Automated culture systems enable the large-scale production of cells. In addition to reducing the time and effort of researchers, an automated culture system improves the reproducibility of cell cultures. In the present study, we designed a new fully automated cell culture system for human iPS cell maintenance. Using the automated culture system, hiPS cells maintained their undifferentiated state for 60 days. Automatically prepared hiPS cells retained the potential to differentiate into cells of all three germ layers, including dopaminergic neurons and pancreatic cells. PMID:26573336

  5. Long-term maintenance of human induced pluripotent stem cells by automated cell culture system.

    PubMed

    Konagaya, Shuhei; Ando, Takeshi; Yamauchi, Toshiaki; Suemori, Hirofumi; Iwata, Hiroo

    2015-11-17

    Pluripotent stem cells, such as embryonic stem cells and induced pluripotent stem (iPS) cells, are regarded as new sources for cell replacement therapy. These cells can expand indefinitely in an undifferentiated state and can be differentiated into multiple cell types. Automated culture systems enable the large-scale production of cells. In addition to reducing the time and effort of researchers, an automated culture system improves the reproducibility of cell cultures. In the present study, we designed a new fully automated cell culture system for human iPS cell maintenance. Using the automated culture system, hiPS cells maintained their undifferentiated state for 60 days. Automatically prepared hiPS cells retained the potential to differentiate into cells of all three germ layers, including dopaminergic neurons and pancreatic cells.

  6. Comparisons between Classical Test Theory and Item Response Theory in Automated Assembly of Parallel Test Forms

    ERIC Educational Resources Information Center

    Lin, Chuan-Ju

    2008-01-01

    The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered, fixed test forms, or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…

  7. National culture and flight deck automation: results of a multination survey.

    PubMed

    Sherman, P J; Helmreich, R L; Merritt, A C

    1997-01-01

    Attitudes regarding flight deck automation were surveyed in a sample of 5,879 airline pilots from 12 nations. The average difference in endorsement levels across 11 items for pilots flying automated aircraft in 12 nations was 53%, reflecting significant national differences in attitudes on all items, with the largest differences observed for preference and enthusiasm for automation. The range of agreement across nations was on average four times larger than the range of agreement across different airlines within the same nation, and roughly six times larger than the range across pilots of standard and pilots of automated aircraft. Patterns of response are described in terms of dimensions of national culture. Implications of the results for development of safety cultures and culturally sensitive training are discussed.

  8. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    SciTech Connect

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary; Liu, Mingliang; Logan, Jeremy S; Podhorszki, Norbert; Choi, Jong Youl; Klasky, Scott A

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
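
    As a rough illustration of the two ingredients named above, phase identification followed by statistical regeneration of event parameters, the sketch below groups a toy event trace into phases and redraws durations from a distribution fitted per phase. APPRIME's actual phase detection and trace formats are more sophisticated; everything here (event types, the lognormal fit, all names) is an assumption for illustration only.

```python
import numpy as np
from itertools import groupby

# Toy trace: (operation, duration) pairs, loosely imitating a communication/I-O trace.
trace = [("compute", d) for d in np.random.gamma(2.0, 0.5, 300)] + \
        [("mpi_allreduce", d) for d in np.random.exponential(0.1, 300)]

# Phase identification (greatly simplified): group consecutive events by type.
phases = [(op, [d for _, d in grp])
          for op, grp in groupby(trace, key=lambda e: e[0])]

def regenerate(durations, n, rng):
    """Statistical regeneration: fit a lognormal by log-moment matching and
    draw n fresh durations for the synthetic benchmark."""
    logs = np.log(durations)
    return rng.lognormal(logs.mean(), logs.std(), n)

rng = np.random.default_rng(1)
synthetic = [(op, regenerate(np.array(ds), len(ds), rng)) for op, ds in phases]
for op, ds in synthetic:
    print(f"{op}: {len(ds)} events, mean {ds.mean():.3f}s")
```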

  9. Reproducible culture and differentiation of mouse embryonic stem cells using an automated microwell platform.

    PubMed

    Hussain, Waqar; Moens, Nathalie; Veraitch, Farlan S; Hernandez, Diana; Mason, Chris; Lye, Gary J

    2013-08-15

    The use of embryonic stem cells (ESCs) and their progeny in high throughput drug discovery and regenerative medicine will require production at scale of well characterized cells at an appropriate level of purity. The adoption of automated bioprocessing techniques offers the possibility to overcome the lack of consistency and high failure rates seen with current manual protocols. To build the case for increased use of automation, this work addresses the key question: "can an automated system match the quality of a highly skilled and experienced person working manually?" To answer this we first describe an integrated automation platform designed for the 'hands-free' culture and differentiation of ESCs in microwell formats. Next we outline a framework for the systematic investigation and optimization of key bioprocess variables for the rapid establishment of validatable Standard Operating Procedures (SOPs). Finally the experimental comparison between manual and automated bioprocessing is exemplified by expansion of the murine Oct-4-GiP ESC line over eight sequential passages with their subsequent directed differentiation into neural precursors. Our results show that ESCs can be effectively maintained and differentiated in a highly reproducible manner by the automated system described. Statistical analysis of the results for cell growth over single and multiple passages shows up to a 3-fold improvement in the consistency of cell growth kinetics with automated passaging. The quality of the cells produced was evaluated using a panel of biological markers including cell growth rate and viability, nutrient and metabolite profiles, changes in gene expression and immunocytochemistry. Automated processing of the ESCs had no measurable negative effect on either their pluripotency or their ability to differentiate into the three embryonic germ layers. Equally important is that over a 6-month period of culture without antibiotics in the medium, we have not had any cases of

  10. Fully automated cellular-resolution vertebrate screening platform with parallel animal processing.

    PubMed

    Chang, Tsung-Yao; Pardo-Martin, Carlos; Allalou, Amin; Wählby, Carolina; Yanik, Mehmet Fatih

    2012-02-21

    The zebrafish larva is an optically transparent vertebrate model with complex organs that is widely used to study genetics and developmental biology, and to model various human diseases. In this article, we present a set of novel technologies that significantly increase the throughput and capabilities of our previously described vertebrate automated screening technology (VAST). We developed a robust multi-thread system that can simultaneously process multiple animals. System throughput is limited only by the image acquisition speed rather than by the fluidic or mechanical processes. We developed image recognition algorithms that fully automate manipulation of animals, including orienting and positioning regions of interest within the microscope's field of view. We also identified the optimal capillary materials for high-resolution, distortion-free, low-background imaging of zebrafish larvae.

  11. Fully automated cellular-resolution vertebrate screening platform with parallel animal processing

    PubMed Central

    Chang, Tsung-Yao; Pardo-Martin, Carlos; Allalou, Amin; Wählby, Carolina; Yanik, Mehmet Fatih

    2012-01-01

    The zebrafish larva is an optically transparent vertebrate model with complex organs that is widely used to study genetics and developmental biology, and to model various human diseases. In this article, we present a set of novel technologies that significantly increase the throughput and capabilities of previously described vertebrate automated screening technology (VAST). We developed a robust multi-thread system that can simultaneously process multiple animals. System throughput is limited only by the image acquisition speed rather than by the fluidic or mechanical processes. We developed image recognition algorithms that fully automate manipulation of animals, including orienting and positioning regions of interest within the microscope’s field of view. We also identified the optimal capillary materials for high-resolution, distortion-free, low-background imaging of zebrafish larvae. PMID:22159032

  12. An automated method for autoradiographic analysis of cultured Schwann cells.

    PubMed

    Baichwal, R R; Yan, L; Bosler, A; DeVries, G H

    1987-08-01

    A semi-automated analysis system based on video image analysis was developed to count labelled and unlabelled nuclei of Schwann cells which had been exposed to tritiated thymidine followed by processing for autoradiography. A Model 3000 Image Analysis system (Image Technology Corporation, Deer Park, NY) was used to acquire and process the images and provide quantitative measurements based on the distinctive size and shape of the Schwann cell nucleus. The maximum and minimum dimensions for the labelled and unlabelled nuclei were determined. These stored dimensional parameters were then compared with the dimensions of a given field of cell nuclei by the image analysis system. The counts from various fields were collected until a total of 1000 labelled and unlabelled nuclei had been analyzed. A labelling index (LI = [labelled cells / total cells] × 100) was then calculated and printed by the system. LIs of autoradiographs determined by automated analysis correlated well with those determined by visual cell counting. The principle of the image analysis program as described here is applicable to other systems for the measurement of LIs of a particular cell type in a mixed population. This automated process eliminates both the subjectivity and fatigue of visual counting and facilitates the rapid measurement of the LI of large numbers of autoradiographs with precision.
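
    The gating-plus-counting logic reduces to a few lines. Here is a hedged sketch assuming hypothetical per-nucleus measurements (area, aspect ratio, labelled/unlabelled call) and illustrative threshold values standing in for the stored dimensional parameters described above.

```python
# Illustrative thresholds, not the paper's stored dimensional parameters.
NUCLEUS_AREA = (40.0, 200.0)     # accepted min/max nucleus area (arbitrary units)
NUCLEUS_ASPECT = (1.5, 6.0)      # Schwann cell nuclei are elongated

def is_schwann_nucleus(r):
    """Gate regions by the distinctive size and shape of the target nucleus."""
    return (NUCLEUS_AREA[0] <= r["area"] <= NUCLEUS_AREA[1]
            and NUCLEUS_ASPECT[0] <= r["aspect"] <= NUCLEUS_ASPECT[1])

def labelling_index(regions):
    nuclei = [r for r in regions if is_schwann_nucleus(r)]
    labelled = sum(1 for r in nuclei if r["labelled"])
    return 100.0 * labelled / len(nuclei)   # LI = labelled / total x 100

regions = [{"area": 90, "aspect": 3.1, "labelled": True},
           {"area": 75, "aspect": 2.4, "labelled": False},
           {"area": 400, "aspect": 1.1, "labelled": True}]  # rejected: too large/round
print(f"LI = {labelling_index(regions):.1f}%")               # -> LI = 50.0%
```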

  13. Culture medium optimization for osmotolerant yeasts by use of a parallel fermenter system and rapid microbiological testing.

    PubMed

    Pfannebecker, Jens; Schiffer-Hetz, Claudia; Fröhlich, Jürgen; Becker, Barbara

    2016-11-01

    In the present study, a culture medium for qualitative detection of osmotolerant yeasts, named OM, was developed. For the development, culture media with different concentrations of glucose, fructose, potassium chloride and glycerin were analyzed in a Biolumix™ test incubator. Selectivity for osmotolerant yeasts was guaranteed by a water activity (aw) value of 0.91. The best results regarding fast growth of Zygosaccharomyces rouxii (WH 1002) were achieved in a culture medium consisting of 45% glucose, 5% fructose and 0.5% yeast extract and in a medium with 30% glucose, 10% glycerin, 5% potassium chloride and 0.5% yeast extract. Substances to stimulate yeast fermentation rates were analyzed in a RAMOS® parallel fermenter system, enabling online measurement of the carbon dioxide transfer rate (CTR) in shaking flasks. Significant increases in the CTR were achieved by adding, in particular, 0.1-0.2% ammonium salts ((NH4)2HPO4, (NH4)2SO4 or NH4NO3), 0.5% meat peptone and 1% malt extract. Detection times and the CTR of 23 food-borne yeast strains of the genera Zygosaccharomyces, Torulaspora, Schizosaccharomyces, Candida and Wickerhamomyces were analyzed in OM bouillon in comparison to the selective culture media YEG50, MYG50 and DG18 in the parallel fermenter system. The OM culture medium enabled the detection of 10² CFU/g within a time period of 2-3 days, depending on the analyzed yeast species. Compared with YEG50 and MYG50, the detection times could be reduced. As an example, W. anomalus (WH 1021) was detected after 124 h in YEG50, 95.5 h in MYG50 and 55 h in OM bouillon. Compared to YEG50, the maximum CO2 transfer rates for Z. rouxii (WH 1001), T. delbrueckii (DSM 70526), S. pombe (DSM 70576) and W. anomalus (WH 1016) increased by a factor ≥2.6. Furthermore, enrichment cultures of inoculated high-sugar products in OM culture medium were analyzed in the Biolumix™ system. The results proved that detection times of 3 days for Z. rouxii and T. delbrueckii can be realized by

  14. An Automated Test Development of Parallel Tests from a Seed Test.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; And Others

    1992-01-01

    A method is presented and illustrated for simultaneously generating multiple tests with similar characteristics from the item bank by using binary programming techniques. The parallel tests are created to match an existing seed test item for item and to match user-supplied taxonomic specifications. (SLD)
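
    One common way to cast such item-for-item matching as a binary program is sketched below. This is a generic, hedged formulation under stated assumptions, not necessarily the authors' exact model: x_ijf indicates that bank item i fills seed slot j on form f, candidate sets B_j encode the taxonomic specifications, and deviations from each seed item's statistics are minimized.

```latex
% Hedged, generic 0-1 formulation (not necessarily the authors' exact model).
% x_{ijf} = 1 iff bank item i fills seed-test slot j on parallel form f;
% B_j = bank items meeting slot j's taxonomic specification;
% p_i = item statistic (e.g., difficulty), p^{seed}_j = the seed item's value.
\begin{align*}
\min\quad & \sum_{f=1}^{F}\sum_{j=1}^{n}\sum_{i \in B_j}
            \left| p_i - p^{\mathrm{seed}}_j \right| x_{ijf} \\
\text{s.t.}\quad
 & \sum_{i \in B_j} x_{ijf} = 1 \qquad \forall\, j, f
   \quad\text{(each slot filled on each form)} \\
 & \sum_{j=1}^{n}\sum_{f=1}^{F} x_{ijf} \le 1 \qquad \forall\, i
   \quad\text{(no bank item reused across forms)} \\
 & x_{ijf} \in \{0, 1\}.
\end{align*}
```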

  15. 5-aroyl-3,4-dihydropyrimidin-2-one library generation via automated sequential and parallel microwave-assisted synthesis techniques.

    PubMed

    Pisani, Leonardo; Prokopcová, Hana; Kremsner, Jennifer M; Kappe, C Oliver

    2007-01-01

    An efficient two-step synthetic pathway toward the preparation of diversely substituted 5-aroyl-3,4-dihydropyrimidin-2-ones is realized. The protocol involves an initial trimethylsilyl chloride-mediated Biginelli multicomponent reaction involving S-ethyl acetothioacetate, aromatic aldehydes, and ureas as building blocks to generate a set of 3,4-dihydropyrimidine-5-carboxylic acid thiol esters. These thiol esters serve as starting materials for a subsequent Pd-catalyzed Cu-mediated Liebeskind-Srogl cross-coupling reaction with boronic acids to provide the desired 5-aroyl-3,4-dihydropyrimidin-2-one derivatives. Both steps were performed using microwave heating in sealed vessels, either in an automated sequential or parallel format using dedicated microwave reactor instrumentation. A diverse library of 30 5-aroyl-3,4-dihydropyrimidin-2-ones was prepared with commercially available aldehyde, urea, and boronic acid building blocks as starting materials.

  16. Prioritizing multiple therapeutic targets in parallel using automated DNA-encoded library screening

    PubMed Central

    Machutta, Carl A.; Kollmann, Christopher S.; Lind, Kenneth E.; Bai, Xiaopeng; Chan, Pan F.; Huang, Jianzhong; Ballell, Lluis; Belyanskaya, Svetlana; Besra, Gurdyal S.; Barros-Aguirre, David; Bates, Robert H.; Centrella, Paolo A.; Chang, Sandy S.; Chai, Jing; Choudhry, Anthony E.; Coffin, Aaron; Davie, Christopher P.; Deng, Hongfeng; Deng, Jianghe; Ding, Yun; Dodson, Jason W.; Fosbenner, David T.; Gao, Enoch N.; Graham, Taylor L.; Graybill, Todd L.; Ingraham, Karen; Johnson, Walter P.; King, Bryan W.; Kwiatkowski, Christopher R.; Lelièvre, Joël; Li, Yue; Liu, Xiaorong; Lu, Quinn; Lehr, Ruth; Mendoza-Losana, Alfonso; Martin, John; McCloskey, Lynn; McCormick, Patti; O’Keefe, Heather P.; O’Keeffe, Thomas; Pao, Christina; Phelps, Christopher B.; Qi, Hongwei; Rafferty, Keith; Scavello, Genaro S.; Steiginga, Matt S.; Sundersingh, Flora S.; Sweitzer, Sharon M.; Szewczuk, Lawrence M.; Taylor, Amy; Toh, May Fern; Wang, Juan; Wang, Minghui; Wilkins, Devan J.; Xia, Bing; Yao, Gang; Zhang, Jean; Zhou, Jingye; Donahue, Christine P.; Messer, Jeffrey A.; Holmes, David; Arico-Muendel, Christopher C.; Pope, Andrew J.; Gross, Jeffrey W.; Evindar, Ghotas

    2017-01-01

    The identification and prioritization of chemically tractable therapeutic targets is a significant challenge in the discovery of new medicines. We have developed a novel method that rapidly screens multiple proteins in parallel using DNA-encoded library technology (ELT). Initial efforts were focused on the efficient discovery of antibacterial leads against 119 targets from Acinetobacter baumannii and Staphylococcus aureus. The success of this effort led to the hypothesis that the relative number of ELT binders alone could be used to assess the ligandability of large sets of proteins. This concept was further explored by screening 42 targets from Mycobacterium tuberculosis. Active chemical series for six targets from our initial effort as well as three chemotypes for DHFR from M. tuberculosis are reported. The findings demonstrate that parallel ELT selections can be used to assess ligandability and highlight opportunities for successful lead and tool discovery. PMID:28714473

  17. Prioritizing multiple therapeutic targets in parallel using automated DNA-encoded library screening

    NASA Astrophysics Data System (ADS)

    Machutta, Carl A.; Kollmann, Christopher S.; Lind, Kenneth E.; Bai, Xiaopeng; Chan, Pan F.; Huang, Jianzhong; Ballell, Lluis; Belyanskaya, Svetlana; Besra, Gurdyal S.; Barros-Aguirre, David; Bates, Robert H.; Centrella, Paolo A.; Chang, Sandy S.; Chai, Jing; Choudhry, Anthony E.; Coffin, Aaron; Davie, Christopher P.; Deng, Hongfeng; Deng, Jianghe; Ding, Yun; Dodson, Jason W.; Fosbenner, David T.; Gao, Enoch N.; Graham, Taylor L.; Graybill, Todd L.; Ingraham, Karen; Johnson, Walter P.; King, Bryan W.; Kwiatkowski, Christopher R.; Lelièvre, Joël; Li, Yue; Liu, Xiaorong; Lu, Quinn; Lehr, Ruth; Mendoza-Losana, Alfonso; Martin, John; McCloskey, Lynn; McCormick, Patti; O'Keefe, Heather P.; O'Keeffe, Thomas; Pao, Christina; Phelps, Christopher B.; Qi, Hongwei; Rafferty, Keith; Scavello, Genaro S.; Steiginga, Matt S.; Sundersingh, Flora S.; Sweitzer, Sharon M.; Szewczuk, Lawrence M.; Taylor, Amy; Toh, May Fern; Wang, Juan; Wang, Minghui; Wilkins, Devan J.; Xia, Bing; Yao, Gang; Zhang, Jean; Zhou, Jingye; Donahue, Christine P.; Messer, Jeffrey A.; Holmes, David; Arico-Muendel, Christopher C.; Pope, Andrew J.; Gross, Jeffrey W.; Evindar, Ghotas

    2017-07-01

    The identification and prioritization of chemically tractable therapeutic targets is a significant challenge in the discovery of new medicines. We have developed a novel method that rapidly screens multiple proteins in parallel using DNA-encoded library technology (ELT). Initial efforts were focused on the efficient discovery of antibacterial leads against 119 targets from Acinetobacter baumannii and Staphylococcus aureus. The success of this effort led to the hypothesis that the relative number of ELT binders alone could be used to assess the ligandability of large sets of proteins. This concept was further explored by screening 42 targets from Mycobacterium tuberculosis. Active chemical series for six targets from our initial effort as well as three chemotypes for DHFR from M. tuberculosis are reported. The findings demonstrate that parallel ELT selections can be used to assess ligandability and highlight opportunities for successful lead and tool discovery.

  18. Fully automated single-use stirred-tank bioreactors for parallel microbial cultivations.

    PubMed

    Kusterer, Andreas; Krause, Christian; Kaufmann, Klaus; Arnold, Matthias; Weuster-Botz, Dirk

    2008-04-01

    Single-use stirred-tank bioreactors at the 10-mL scale, operated in a magnetic-inductive bioreaction block holding 48 bioreactors, were equipped with individual stirrer-speed tracing as well as individual DO and pH monitoring and control. A Hall-effect sensor system was integrated into the bioreaction block to individually measure the changes in magnetic field density caused by the rotating permanent magnets. A restart of the magnetic-inductive drive was initiated automatically each time a Hall-effect sensor indicated a non-rotating gas-inducing stirrer. Individual DO and pH were monitored online by measuring the fluorescence decay time of two chemical sensors immobilized at the bottom of each single-use bioreactor. Parallel DO measurements were shown to be very reliable and independent of the fermentation media applied in this study for the cultivation of Escherichia coli and Saccharomyces cerevisiae. The standard deviation of parallel pH measurements was at minimum 0.1 pH units at pH 7.0 and increased to 0.2 pH units at pH 6.0 or pH 8.5 with the complex medium applied for fermentations with S. cerevisiae. Parallel pH control was thus shown to be meaningful with a tolerance band around the pH set-point of +/- 0.2 pH units if the set-point is pH 6.0 or lower.
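
    One plausible reading of the tolerance-band pH control described above is a simple dead-band controller: dose only when the measured pH leaves the ±0.2 band around the set-point, which avoids actuator chatter at the sensors' noise level. The sketch below is an assumption-laden illustration; `read_ph`, `dose_acid`, and `dose_base` are hypothetical device calls, not the vendor's API.

```python
import time

SETPOINT, BAND = 6.0, 0.2   # illustrative set-point and tolerance band

def control_step(reactor):
    ph = reactor.read_ph()             # e.g. a fluorescence-decay-based sensor
    if ph > SETPOINT + BAND:
        reactor.dose_acid()            # titrate only outside the band
    elif ph < SETPOINT - BAND:
        reactor.dose_base()
    # inside the band: do nothing, avoiding chatter near the sensor noise level

def run(reactors, period_s=30):
    while True:
        for r in reactors:             # e.g. 48 bioreactors polled in turn
            control_step(r)
        time.sleep(period_s)
```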

  19. Pursuing Darwin’s curious parallel: Prospects for a science of cultural evolution

    PubMed Central

    Mesoudi, Alex

    2017-01-01

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities. PMID:28739929

  20. Pursuing Darwin's curious parallel: Prospects for a science of cultural evolution.

    PubMed

    Mesoudi, Alex

    2017-07-24

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities.

  1. Automation of Molecular-Based Analyses: A Primer on Massively Parallel Sequencing

    PubMed Central

    Nguyen, Lan; Burnett, Leslie

    2014-01-01

    Recent advances in genetics have been enabled by new genetic sequencing techniques called massively parallel sequencing (MPS) or next-generation sequencing. Through the ability to sequence hundreds of thousands to millions of DNA fragments in parallel, the cost and time required for sequencing have dramatically decreased. There are a number of different MPS platforms currently available and being used in Australia. Although they differ in the underlying technology involved, their overall processes are very similar: DNA fragmentation, adaptor ligation, immobilisation, amplification, sequencing reaction and data analysis. MPS is being used in research and translational settings, and increasingly now also in clinical settings. Common applications include sequencing of whole genomes, whole exomes or targeted genes for disease-causing gene discovery, genetic diagnosis and targeted cancer therapy. Even though the revolution that is occurring with MPS is exciting due to its increasing use, improving and emerging technologies and new applications, significant challenges still exist. Particularly challenging issues are the bioinformatics required for data analysis, interpretation of results and the ethical dilemma of ‘incidental findings’. PMID:25336762

  2. Two-dimensional parallel array technology as a new approach to automated combinatorial solid-phase organic synthesis

    PubMed

    Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze

    1998-01-01

    An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection-molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a large number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent are stored in a separate reagent table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high-throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure, and were utilized directly for high-throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery. Copyright 1998 John Wiley & Sons, Inc.
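
    The three-file control scheme described above (command file → sequence file → reagent table) is essentially a pair of lookups ahead of each hardware delivery. The sketch below illustrates that data flow with hypothetical CSV layouts and a stand-in `deliver` hardware call; none of these names come from the instrument's actual software.

```python
import csv

def load_reagent_table(path):
    """Reagent table: one row per reagent (columns: name, bottle, flow, conc)."""
    with open(path) as f:
        return {row["name"]: row for row in csv.DictReader(f)}

def load_sequence(path):
    """Sequence file: which reagent goes to which well at each step
    (columns: well, step, reagent)."""
    seq = {}
    with open(path) as f:
        for row in csv.DictReader(f):
            seq.setdefault(int(row["step"]), {})[row["well"]] = row["reagent"]
    return seq

def run_step(step, sequence, reagents, deliver):
    """One command-file step: look up each well's reagent, then its delivery
    parameters. `deliver(well, bottle, flow, conc)` is a hardware stand-in."""
    for well, name in sequence[step].items():
        r = reagents[name]
        deliver(well, r["bottle"], float(r["flow"]), float(r["conc"]))
```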

  3. Impact of Implementation of an Automated Liquid Culture System on Diagnosis of Tuberculous Pleurisy.

    PubMed

    Lee, Byung Hee; Yoon, Seong Hoon; Yeo, Hye Ju; Kim, Dong Wan; Lee, Seung Eun; Cho, Woo Hyun; Lee, Su Jin; Kim, Yun Seong; Jeon, Doosoo

    2015-07-01

    This study was conducted to evaluate the impact of implementation of an automated liquid culture system on the diagnosis of tuberculous pleurisy in an HIV-uninfected patient population. We retrospectively compared the culture yield, time to positivity, and contamination rate of pleural effusion samples in the BACTEC Mycobacteria Growth Indicator Tube 960 (MGIT) and Ogawa media among patients with tuberculous pleurisy. Out of 104 effusion samples, 43 (41.3%) were culture positive on either the MGIT or the Ogawa media. The culture yield of MGIT was higher (40.4%, 42/104) than that of Ogawa media (18.3%, 19/104) (P<0.001). One of the samples was positive only on the Ogawa medium. The median time to positivity was shorter in the MGIT (18 days, range 8-32 days) than in the Ogawa media (37 days, range 20-59 days) (P<0.001). No contamination or growth of nontuberculous mycobacteria was observed on either of the culture media. In conclusion, compared to solid media, the automated liquid culture system provided approximately twice the yield and faster results in effusion culture. Supplemental solid media may have a limited impact on maximizing sensitivity in effusion culture; however, further studies are required.

  4. Automated and online characterization of adherent cell culture growth in a microfabricated bioreactor.

    PubMed

    Jaccard, Nicolas; Macown, Rhys J; Super, Alexandre; Griffin, Lewis D; Veraitch, Farlan S; Szita, Nicolas

    2014-10-01

    Adherent cell lines are widely used across all fields of biology, including drug discovery, toxicity studies, and regenerative medicine. However, adherent cell processes are often limited by a lack of advances in cell culture systems. While suspension culture processes benefit from decades of development of instrumented bioreactors, adherent cultures are typically performed in static, noninstrumented flasks and well-plates. We previously described a microfabricated bioreactor that enables a high degree of control over the microenvironment of the cells while remaining compatible with standard cell culture protocols. In this report, we describe its integration with automated image-processing capabilities, allowing the continuous monitoring of key cell culture characteristics. A machine learning-based algorithm enabled the specific detection of one cell type within a co-culture setting, such as human embryonic stem cells against the background of fibroblast cells. In addition, the algorithm did not confuse image artifacts resulting from microfabrication, such as scratches on surfaces, or dust particles, with cellular features. We demonstrate how the automation of flow control, environmental control, and image acquisition can be employed to image the whole culture area and obtain time-course data, for example of confluency, for mouse embryonic stem cell cultures.

  5. Automated and Online Characterization of Adherent Cell Culture Growth in a Microfabricated Bioreactor

    PubMed Central

    Jaccard, Nicolas; Macown, Rhys J.; Super, Alexandre; Griffin, Lewis D.; Veraitch, Farlan S.

    2014-01-01

    Adherent cell lines are widely used across all fields of biology, including drug discovery, toxicity studies, and regenerative medicine. However, adherent cell processes are often limited by a lack of advances in cell culture systems. While suspension culture processes benefit from decades of development of instrumented bioreactors, adherent cultures are typically performed in static, noninstrumented flasks and well-plates. We previously described a microfabricated bioreactor that enables a high degree of control over the microenvironment of the cells while remaining compatible with standard cell culture protocols. In this report, we describe its integration with automated image-processing capabilities, allowing the continuous monitoring of key cell culture characteristics. A machine learning–based algorithm enabled the specific detection of one cell type within a co-culture setting, such as human embryonic stem cells against the background of fibroblast cells. In addition, the algorithm did not confuse image artifacts resulting from microfabrication, such as scratches on surfaces, or dust particles, with cellular features. We demonstrate how the automation of flow control, environmental control, and image acquisition can be employed to image the whole culture area and obtain time-course data, for example of confluency, for mouse embryonic stem cell cultures. PMID:24692228

  6. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
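
    A hedged miniature of the registration idea: keep only the strongest wavelet-detail coefficients of each image as features, cross-correlate the two feature maps, and read the translation off the correlation peak, scaled back up by the decomposition's downsampling factor. This toy (PyWavelets/SciPy, translation only, illustrative wavelet and threshold choices) omits the rotation search and the parallel implementation of the actual algorithm.

```python
import numpy as np
import pywt
from scipy.signal import fftconvolve

def wavelet_features(img, level=3, keep=0.02):
    """Keep only the strongest wavelet-detail maxima as registration features."""
    coeffs = pywt.wavedec2(img.astype(float), "db2", level=level)
    detail = sum(np.abs(c) for c in coeffs[1])      # H, V, D details at top level
    thresh = np.quantile(detail, 1.0 - keep)
    return np.where(detail >= thresh, detail, 0.0)

def estimate_shift(reference, target, level=3):
    a, b = wavelet_features(reference, level), wavelet_features(target, level)
    corr = fftconvolve(a, b[::-1, ::-1], mode="full")       # cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = (peak[0] - (b.shape[0] - 1)) * 2**level            # undo downsampling
    dx = (peak[1] - (b.shape[1] - 1)) * 2**level
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((256, 256))
shifted = np.roll(ref, (24, -16), axis=(0, 1))
print(estimate_shift(ref, shifted))   # approx (24, -16); coarse at level 3
```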

  7. Effects of diluents on cell culture viability measured by automated cell counter

    PubMed Central

    Chen, Aaron; Leith, Matthew; Tu, Roger; Tahim, Gurpreet; Sudra, Anish; Bhargava, Swapnil

    2017-01-01

    Commercially available automated cell counters based on trypan blue dye-exclusion are widely used in industrial cell culture process development and manufacturing to increase throughput and eliminate inherent variability in subjective interpretation associated with manual hemocytometers. When using these cell counters, sample dilution is often necessary to stay within the assay measurement range; however, the effect of time and diluents on cell culture is not well understood. This report presents the adverse effect of phosphate buffered saline as a diluent on cell viability when used in combination with an automated cell counter. The reduced cell viability was attributed to shear stress introduced by the automated cell counter. Furthermore, length of time samples were incubated in phosphate buffered saline also contributed to the observed drop in cell viability. Finally, as erroneous viability measurements can severely impact process decisions and product quality, this report identifies several alternative diluents that can maintain cell culture viability over time in order to ensure accurate representation of cell culture conditions. PMID:28264018

  8. Explorations of Space-Charge Limits in Parallel-Plate Diodes and Associated Techniques for Automation

    NASA Astrophysics Data System (ADS)

    Ragan-Kelley, Benjamin

    Space-charge limited flow is a topic of much interest and varied application. We extend existing understanding of space-charge limits by simulations, and develop new tools and techniques for doing these simulations along the way. The Child-Langmuir limit is a simple analytic solution for space-charge limited current density in a one-dimensional diode. It has been previously extended to two dimensions by numerical calculation in planar geometries. By considering an axisymmetric cylindrical system with axial emission from a circular cathode of finite radius r, an outer drift tube of radius R > r, and gap length L, we further examine the space-charge limit in two dimensions. We simulate a two-dimensional axisymmetric parallel-plate diode of various aspect ratios (r/L), and develop a scaling law for the measured two-dimensional space-charge limit (2DSCL) relative to the Child-Langmuir limit as a function of the aspect ratio of the diode. These simulations are done with a large (100 T) longitudinal magnetic field to restrict electron motion to 1D, with the two-dimensional particle-in-cell simulation code OOPIC. We find a scaling law that is a monotonically decreasing function of this aspect ratio, and the one-dimensional result is recovered in the limit as r >> L. The result is in good agreement with prior results in planar geometry, where the emission area is proportional to the cathode width. We find a weak contribution from the effects of the drift tube for current at the beam edge, and a strong contribution of high current-density "wings" at the outer edge of the beam, with a very large relative contribution when the beam is narrow. Mechanisms for enhancing current beyond the Child-Langmuir limit remain a matter of great importance. We analyze the enhancement effects of upstream ion injection on the transmitted current in a one-dimensional parallel-plate diode. Electrons are field-emitted at the cathode, and ions are injected at a controlled current from the anode. An analytic
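
    For reference, the classical one-dimensional Child-Langmuir limit against which the two-dimensional simulations are normalized is the standard textbook result below; the thesis's contribution is the empirical correction factor multiplying it (our notation for that factor, not the author's).

```latex
% Standard 1D Child-Langmuir current density for a planar diode with gap
% voltage V and gap length L (vacuum permittivity \varepsilon_0, electron
% charge e and mass m_e):
J_{\mathrm{CL}} \;=\; \frac{4\,\varepsilon_0}{9}\,
                      \sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{L^{2}}
% The 2D result summarized above then reads
% J_{\mathrm{2DSCL}} = f(r/L)\, J_{\mathrm{CL}}, with f monotonically
% decreasing in r/L and f \to 1 in the limit r/L \to \infty.
```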

  9. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    NASA Technical Reports Server (NTRS)

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  10. Automated dynamic fed-batch process and media optimization for high productivity cell culture process development.

    PubMed

    Lu, Franklin; Toh, Poh Choo; Burnett, Iain; Li, Feng; Hudson, Terry; Amanullah, Ashraf; Li, Jincai

    2013-01-01

    Current industry practices for large-scale mammalian cell cultures typically employ a standard platform fed-batch process with fixed volume bolus feeding. Although widely used, these processes are unable to respond to actual nutrient consumption demands from the culture, which can result in accumulation of by-products and depletion of certain nutrients. This work demonstrates the application of a fully automated cell culture control, monitoring, and data processing system to achieve significant productivity improvement via dynamic feeding and media optimization. Two distinct feeding algorithms were used to dynamically alter feed rates. The first method is based upon on-line capacitance measurements where cultures were fed based on growth and nutrient consumption rates estimated from integrated capacitance. The second method is based upon automated glucose measurements obtained from the Nova Bioprofile FLEX® autosampler where cultures were fed to maintain a target glucose level which in turn maintained other nutrients based on a stoichiometric ratio. All of the calculations were done automatically through in-house integration with a Delta V process control system. Through both media and feed strategy optimization, a titer increase from the original platform titer of 5 to 6.3 g/L was achieved for cell line A, and a substantial titer increase of 4 to over 9 g/L was achieved for cell line B with comparable product quality. Glucose was found to be the best feed indicator, but not all cell lines benefited from dynamic feeding and optimized feed media was critical to process improvement. Our work demonstrated that dynamic feeding has the ability to automatically adjust feed rates according to culture behavior, and that the advantage can be best realized during early and rapid process development stages where different cell lines or large changes in culture conditions might lead to dramatically different nutrient demands. Copyright © 2012 Wiley Periodicals, Inc.
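
    A minimal sketch of the arithmetic behind the second (glucose-indexed) feeding algorithm: each automated glucose measurement triggers a bolus sized to restore glucose to its target, and because the other nutrients are co-formulated in the feed at a fixed stoichiometric ratio to glucose, they track consumption as well. The set-point, feed concentration, and function names are illustrative assumptions, not values from the paper.

```python
GLUCOSE_TARGET = 4.0   # g/L set-point maintained between samples (illustrative)
FEED_GLUCOSE = 50.0    # g/L glucose in the feed medium (illustrative)

def feed_volume(glucose_meas, culture_volume_l):
    """Feed volume (L) that returns glucose to target; 0 if already above it.
    Mass balance: (g*V + F*v) / (V + v) = T  =>  v = V*(T - g) / (F - T)."""
    deficit = max(0.0, GLUCOSE_TARGET - glucose_meas)
    return deficit * culture_volume_l / (FEED_GLUCOSE - GLUCOSE_TARGET)

# Example: a 2 L culture sampled at 2.5 g/L glucose receives one bolus;
# nutrients co-formulated at a fixed ratio to glucose are dosed with it.
print(f"{feed_volume(2.5, 2.0):.3f} L")
```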

  11. A pumpless perfusion cell culture cap with two parallel channel layers keeping the flow rate constant.

    PubMed

    Lee, Dong Woo; Yi, Sang Hyun; Ku, Bosung; Kim, Jhingook

    2012-01-01

    This article presents a novel pumpless perfusion cell culture cap, the gravity-driven flow rate of which is kept constant by the height difference of two parallel channel layers. Previous pumpless perfusion cell culture systems create a gravity-driven flow by means of the hydraulic head difference (Δh) between the source reservoir and the drain reservoir. As more media passes from the source reservoir to the drain reservoir, the source media level decreases and the drain media level increases. Thus, previous works based on a gravity-driven flow were unable to supply a constant flow rate for the perfusion cell culture. However, the proposed perfusion cell culture cap can supply a constant flow rate, because the media level remains unchanged as the media moves laterally through each channel layer, both layers keeping the same media level. In experiments, using different fluidic resistances, the perfusion cap generated constant flow rates of 871 ± 27 μL h⁻¹ and 446 ± 11 μL h⁻¹. The 871 and 446 μL h⁻¹ flow rates replace the entire 20 mL of medium in the petri dish with fresh medium in 1 and 2 days, respectively. In the perfusion cell (A549 cell line) culture with the 871 μL h⁻¹ flow rate, the proposed cap can maintain a lactate concentration of about 2200 nmol mL⁻¹ and an ammonia concentration of about 3200 nmol mL⁻¹. Moreover, although the static cell culture maintains cell viability for 5 days, the perfusion cell culture with the 871 μL h⁻¹ flow rate can maintain cell viability for 9 days. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
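
    The design argument can be summarized in one line of hydraulics, using a standard lumped-resistance model (the notation is ours, not the paper's):

```latex
% Gravity-driven flow through a channel of hydraulic resistance R
% (\Delta P = \rho g \Delta h):
Q \;=\; \frac{\Delta P}{R} \;=\; \frac{\rho\, g\, \Delta h}{R}
% Reservoir-to-reservoir designs: \Delta h shrinks as media transfers, so Q
% decays over time. In the proposed cap, media move laterally within each of
% the two channel layers, so \Delta h is fixed by the layer-height difference
% and Q stays constant; different R values set the two reported rates
% (871 and 446 \mu L\, h^{-1}).
```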

  12. Performance of Gram staining on blood cultures flagged negative by an automated blood culture system.

    PubMed

    Peretz, A; Isakovich, N; Pastukh, N; Koifman, A; Glyatman, T; Brodsky, D

    2015-08-01

    Blood is one of the most important specimens sent to a microbiology laboratory for culture. Most blood cultures are incubated for 5-7 days, except in cases where there is a suspicion of infection caused by microorganisms that proliferate slowly, or infections expressed by a small number of bacteria in the bloodstream. Therefore, at the end of incubation, misidentification of positive cultures and false-negative results are a real possibility. The aim of this work was to perform a confirmation by Gram staining of the lack of any microorganisms in blood cultures that were identified as negative by the BACTEC™ FX system at the end of incubation. All bottles defined as negative by the BACTEC FX system were Gram-stained using an automatic device and inoculated on solid growth media. In our work, 15 cultures that were defined as negative by the BACTEC FX system at the end of the incubation were found to contain microorganisms when Gram-stained. The main characteristic of most bacteria and fungi growing in the culture bottles that were defined as negative was slow growth. This finding raises a problematic issue concerning the need to perform Gram staining of all blood cultures, which could overload the routine laboratory work, especially laboratories serving large medical centers and receiving a large number of blood cultures.

  13. A comparison of long-term parallel measurements of sunshine duration obtained with a Campbell-Stokes sunshine recorder and two automated sunshine sensors

    NASA Astrophysics Data System (ADS)

    Baumgartner, D. J.; Pötzi, W.; Freislich, H.; Strutzmann, H.; Veronig, A. M.; Foelsche, U.; Rieder, H. E.

    2017-06-01

    In recent decades, automated sensors for sunshine duration (SD) measurements have been introduced in meteorological networks, thereby replacing traditional instruments, most prominently the Campbell-Stokes (CS) sunshine recorder. Parallel records of automated and traditional SD recording systems are rare. Nevertheless, such records are important to understand the differences/similarities in SD totals obtained with different instruments and how changes in monitoring device type affect the homogeneity of SD records. This study investigates the differences/similarities in parallel SD records obtained with a CS and two automated SD sensors between 2007 and 2016 at the Kanzelhöhe Observatory, Austria. Comparing individual records of daily SD totals, we find differences of both positive and negative sign, with smallest differences between the automated sensors. The larger differences between CS-derived SD totals and those from automated sensors can be attributed (largely) to the higher sensitivity threshold of the CS instrument. Correspondingly, the closest agreement among all sensors is found during summer, the time of year when sensitivity thresholds are least critical. Furthermore, we investigate the performance of various models to create the so-called sensor-type-equivalent (STE) SD records. Our analysis shows that regression models including all available data on daily (or monthly) time scale perform better than simple three- (or four-) point regression models. Despite general good performance, none of the considered regression models (of linear or quadratic form) emerges as the "optimal" model. Although STEs prove useful for relating SD records of individual sensors on daily/monthly time scales, this does not ensure that STE (or joint) records can be used for trend analysis.

  14. Study of Automated Embryo Manipulation Using Dynamic Microarray: Trapping, Culture and Collection

    NASA Astrophysics Data System (ADS)

    Kimura, Hiroshi; Nakamura, Hiroko; Iwai, Kosuke; Yamamoto, Takatoki; Takeuchi, Shoji; Fujii, Teruo; Sakai, Yasuyuki

    Embryo handling is an extremely important fundamental technique in reproductive technology and other related life science disciplines. Handling usually requires a delicate manual operation in which a glass capillary tube is used to aspirate and expel the embryo by pressure applied by mouth or by pipetting, to move it from one environment to another, and to redeliver it into the womb. Because of the delicacy of these operations, it is difficult to obtain quantitative results from such experiments. An automated embryo-handling system has therefore been highly desired, both to obtain stable quantitative results and to reduce the stress on operators. In this paper, we propose and develop an automated embryo culture device which can make an array of embryos, culture them to the blastocyst stage, and collect the blastocysts, using the dynamic microarray format that we had studied previously. We preliminarily examined the three functions of trapping, culture, and release using mouse embryos as samples. As a result, the mouse embryos were successfully trapped and released, whereas the efficiency of the in-device embryo culture was lower than that of conventional dish culture. The culture stage still needs optimization for embryos; however, the concept of automated embryo manipulation was proven successfully.

  15. Reduced length of hospital stay through a point of care placed automated blood culture instrument.

    PubMed

    Bruins, M J; Egbers, M J; Israel, T M; Diepeveen, S H A; Wolfhagen, M J H M

    2017-04-01

    Early appropriate antimicrobial treatment of patients with sepsis has a large impact on clinical outcome. To enable prompt and efficient processing of blood cultures, the inoculated vials should be placed into an automated continuously monitoring blood culture system immediately after sampling. We placed an extra BACTEC FX instrument at the emergency department of our hospital and validated the twice-daily re-entering of ongoing vials from this instrument into the BACTEC FX at the laboratory. We subsequently assessed the benefits of shortening the transport time between sampling and monitored incubation of blood culture vials by comparing the turnaround times of positive blood cultures from emergency department patients with a historical control group. Re-entering ongoing vials within 2 h raised no technical problems with the BACTEC FX and did not increase the risk of false-negative culture results. The decreased transport time resulted in significantly earlier available Gram stain results for a large proportion of patients in the intervention group and a significant shortening of the median total turnaround time to less than 48 h. The median length of hospital stay shortened by 1 day. Immediate entering of blood culture vials into a point of care placed BACTEC FX instrument and subsequent efficient processing enables earlier decision-making regarding antimicrobial treatment, preventing the development of antimicrobial resistance and reducing healthcare costs.

  16. Design and Implementation of an Automated Illuminating, Culturing, and Sampling System for Microbial Optogenetic Applications.

    PubMed

    Stewart, Cameron J; McClean, Megan N

    2017-02-19

    Optogenetic systems utilize genetically-encoded proteins that change conformation in response to specific wavelengths of light to alter cellular processes. There is a need for culturing and measuring systems that incorporate programmed illumination and stimulation of optogenetic systems. We present a protocol for building and using a continuous culturing apparatus to illuminate microbial cells with programmed doses of light, and automatically acquire and analyze images of cells in the effluent. The operation of this apparatus as a chemostat allows the growth rate and the cellular environment to be tightly controlled. The effluent of the continuous cell culture is regularly sampled and the cells are imaged by multi-channel microscopy. The culturing, sampling, imaging, and image analysis are fully automated so that dynamic responses in the fluorescence intensity and cellular morphology of cells sampled from the culture effluent are measured over multiple days without user input. We demonstrate the utility of this culturing apparatus by dynamically inducing protein production in a strain of Saccharomyces cerevisiae engineered with an optogenetic system that activates transcription.

  17. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    NASA Technical Reports Server (NTRS)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.

    2004-01-01

    This report documents a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high-oxygen-content medium into the surrounding medium, which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and the mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  18. Automated liquid-liquid extraction workstation for library synthesis and its use in the parallel and chromatography-free synthesis of 2-alkyl-3-alkyl-4-(3H)-quinazolinones.

    PubMed

    Carpintero, Mercedes; Cifuentes, Marta; Ferritto, Rafael; Haro, Rubén; Toledo, Miguel A

    2007-01-01

    An automated liquid-liquid extraction workstation has been developed. This module processes up to 96 samples in an automated and parallel mode avoiding the time-consuming and intensive sample manipulation during the workup process. To validate the workstation, a highly automated and chromatography-free synthesis of differentially substituted quinazolin-4(3H)-ones with two diversity points has been carried out using isatoic anhydride as starting material.

  19. Bacterial and fungal DNA extraction from positive blood culture bottles: a manual and an automated protocol.

    PubMed

    Mäki, Minna

    2015-01-01

    When adapting a gene amplification-based method for routine sepsis diagnostics using a blood culture sample as the specimen type, a prerequisite for successful and sensitive downstream analysis is an efficient DNA extraction step. In recent years, a number of in-house and commercial DNA extraction solutions have become available. Careful evaluation with respect to cell wall disruption of various microbes and subsequent recovery of microbial DNA, free of putative gene amplification inhibitors, should be conducted prior to selecting the most feasible DNA extraction solution for the downstream analysis used. Since gene amplification technologies have been developed to be highly sensitive for a broad range of microbial species, it is also important to confirm that the sample preparation reagents and materials used are bioburden-free, to avoid any risk of false-positive result reporting or interference with the diagnostic process. Here, one manual and one automated DNA extraction system feasible for blood culture samples are described.

  20. A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Kıvılcım, C. Ö.; Duran, Z.

    2016-06-01

    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. Conventional measurement techniques, however, require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-related applications for historic building documentation has become an active area of research; fully automated systems for cultural heritage documentation, however, remain an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets, based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We demonstrate the contribution of our methodology, implemented in an open source software environment, on an example project: a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  1. Repeated stimulation of cultured networks of rat cortical neurons induces parallel memory traces

    PubMed Central

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and small-scale computational models to study the effect of memory replay on the formation of memory traces. We show that input-deprived networks develop an activity⇔connectivity balance where dominant activity patterns support current connectivity. Electrical stimulation at one electrode disturbs this balance and induces connectivity changes. Intrinsic forces in recurrent networks lead to a new equilibrium with activity patterns that include the stimulus response. The new connectivity is no longer disrupted by this stimulus, indicating that networks memorize it. A different stimulus again induces connectivity changes upon first application but not subsequently, demonstrating the formation of a second memory trace. Returning to the first stimulus does not affect connectivity, indicating parallel storage of both traces. A computer model robustly reproduced experimental results, suggesting that spike-timing-dependent plasticity and short-term depression suffice to store parallel memory traces, even in networks without particular circuitry constraints. PMID:26572650
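
    For orientation, a pair-based spike-timing-dependent plasticity rule of the kind used in such models can be sketched as follows; the parameters are arbitrary, and the short-term depression component of the model is omitted for brevity.

      import math

      def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
          """Weight change for one pre/post spike pair.
          dt_ms = t_post - t_pre: positive (pre leads post) potentiates,
          negative (post leads pre) depresses."""
          if dt_ms > 0:
              return a_plus * math.exp(-dt_ms / tau_ms)
          return -a_minus * math.exp(dt_ms / tau_ms)

      print(stdp_dw(5.0))    # pre before post -> potentiation
      print(stdp_dw(-5.0))   # post before pre -> depression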

  2. Conventional versus automated measurement of blood pressure in primary care patients with systolic hypertension: randomised parallel design controlled trial

    PubMed Central

    Godwin, Marshall; Dawes, Martin; Kiss, Alexander; Tobe, Sheldon W; Grant, F Curry; Kaczorowski, Janusz

    2011-01-01

    Objective To compare the quality and accuracy of manual office blood pressure and automated office blood pressure using the awake ambulatory blood pressure as a gold standard. Design Multi-site cluster randomised controlled trial. Setting Primary care practices in five cities in eastern Canada. Participants 555 patients with systolic hypertension and no serious comorbidities under the care of 88 primary care physicians in 67 practices in the community. Interventions Practices were randomly allocated to either ongoing use of manual office blood pressure (control group) or automated office blood pressure (intervention group) using the BpTRU device. The last routine manual office blood pressure (mm Hg) was obtained from each patient’s medical record before enrolment. Office blood pressure readings were compared before and after enrolment in the intervention and control groups; all readings were also compared with the awake ambulatory blood pressure. Main outcome measure Difference in systolic blood pressure between awake ambulatory blood pressure minus automated office blood pressure and awake ambulatory blood pressure minus manual office blood pressure. Results Cluster randomisation allocated 31 practices (252 patients) to manual office blood pressure and 36 practices (303 patients) to automated office blood pressure measurement. The most recent routine manual office blood pressure (149.5 (SD 10.8)/81.4 (8.3)) was higher than automated office blood pressure (135.6 (17.3)/77.7 (10.9)) (P<0.001). In the control group, routine manual office blood pressure before enrolment (149.9 (10.7)/81.8 (8.5)) was reduced to 141.4 (14.6)/80.2 (9.5) after enrolment (P<0.001/P=0.01), but the reduction in the intervention group from manual office to automated office blood pressure was significantly greater (P<0.001/P=0.02). On the first study visit after enrolment, the estimated mean difference for the intervention group between the awake ambulatory systolic/diastolic blood pressure

  3. Quantification of Dynamic Morphological Drug Responses in 3D Organotypic Cell Cultures by Automated Image Analysis

    PubMed Central

    Härmä, Ville; Schukov, Hannu-Pekka; Happonen, Antti; Ahonen, Ilmari; Virtanen, Johannes; Siitari, Harri; Åkerfelt, Malin; Lötjönen, Jyrki; Nees, Matthias

    2014-01-01

    Glandular epithelial cells differentiate into complex multicellular or acinar structures, when embedded in three-dimensional (3D) extracellular matrix. The spectrum of different multicellular morphologies formed in 3D is a sensitive indicator for the differentiation potential of normal, non-transformed cells compared to different stages of malignant progression. In addition, single cells or cell aggregates may actively invade the matrix, utilizing epithelial, mesenchymal or mixed modes of motility. Dynamic phenotypic changes involved in 3D tumor cell invasion are sensitive to specific small-molecule inhibitors that target the actin cytoskeleton. We have used a panel of inhibitors to demonstrate the power of automated image analysis as a phenotypic or morphometric readout in cell-based assays. We introduce a streamlined stand-alone software solution that supports large-scale high-content screens, based on complex and organotypic cultures. AMIDA (Automated Morphometric Image Data Analysis) allows quantitative measurements of large numbers of images and structures, with a multitude of different spheroid shapes, sizes, and textures. AMIDA supports an automated workflow, and can be combined with quality control and statistical tools for data interpretation and visualization. We have used a representative panel of 12 prostate and breast cancer lines that display a broad spectrum of different spheroid morphologies and modes of invasion, challenged by a library of 19 direct or indirect modulators of the actin cytoskeleton which induce systematic changes in spheroid morphology and differentiation versus invasion. These results were independently validated by 2D proliferation, apoptosis and cell motility assays. We identified three drugs that primarily attenuated the invasion and formation of invasive processes in 3D, without affecting proliferation or apoptosis. Two of these compounds block Rac signalling, one affects cellular cAMP/cGMP accumulation. Our approach supports

  4. Evaluation of a Multi-Parameter Sensor for Automated, Continuous Cell Culture Monitoring in Bioreactors

    NASA Technical Reports Server (NTRS)

    Pappas, D.; Jeevarajan, A.; Anderson, M. M.

    2004-01-01

    Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments in microgravity. Measurement of cell culture medium allows for the optimization of culture conditions on orbit to maximize cell growth and minimize unnecessary exchange of medium. While several discrete sensors exist to measure culture health, a multi-parameter sensor would simplify the experimental apparatus. One such sensor, the Paratrend 7, consists of three optical fibers for measuring pH, dissolved oxygen (pO2), and dissolved carbon dioxide (pCO2), and a thermocouple to measure temperature. The sensor bundle was designed for intra-arterial placement in clinical patients, and potentially can be used in NASA's Space Shuttle and International Space Station biotechnology program bioreactors. Methods: A Paratrend 7 sensor was placed at the outlet of a rotating-wall perfused vessel bioreactor system inoculated with BHK-21 (baby hamster kidney) cells. Cell culture medium (GTSF-2, composed of 40% minimum essential medium, 60% L-15 Leibovitz medium) was manually measured using a bench top blood gas analyzer (BGA, Ciba-Corning). Results: A Paratrend 7 sensor was used over a long-term (>120 day) cell culture experiment. The sensor was able to track changes in cell medium pH, pO2, and pCO2 due to the consumption of nutrients by the BHK-21 cells. When compared to manually obtained BGA measurements, the sensor had good agreement for pH, pO2, and pCO2, with bias [and precision] of 0.02 [0.15], 1 mm Hg [18 mm Hg], and -4.0 mm Hg [8.0 mm Hg], respectively. The Paratrend oxygen sensor was recalibrated (offset) periodically due to drift. The bias for the raw (no offset or recalibration) oxygen measurements was 42 mm Hg [38 mm Hg]. The measured response (rise) time of the sensor was 20 +/- 4 s for pH, 81 +/- 53 s for pCO2, and 51 +/- 20 s for pO2. For long-term cell culture measurements, these response times are more than adequate. Based on these findings, the Paratrend sensor could

  5. A landscape lake flow pattern design approach based on automated CFD simulation and parallel multiple objective optimization.

    PubMed

    Guo, Hao; Tian, Yimei; Shen, Hailiang; Wang, Yi; Kang, Mengxin

    A design approach for determining the optimal flow pattern in a landscape lake is proposed, based on FLUENT simulation, multiple objective optimization, and parallel computing. This paper formulates the design as a multi-objective optimization problem, with lake circulation effects and operation cost as the two objectives, and solves the optimization problem with the non-dominated sorting genetic algorithm II (NSGA-II). The lake flow pattern is modelled in FLUENT. The parallelization targets multiple concurrent FLUENT instance runs, which is distinct from the FLUENT internal parallel solver. This approach: (1) proposes lake flow pattern metrics, i.e. weighted average water flow velocity, water volume percentage of low flow velocity, and variance of flow velocity; (2) defines user-defined functions for boundary setting and for objective and constraint calculation; and (3) parallelizes the execution of multiple FLUENT instance runs to significantly reduce the optimization wall-clock time. The proposed approach is demonstrated through a case study of Meijiang Lake in Tianjin, China.
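
    The parallelization of whole solver runs can be sketched as below; run_fluent_case and read_flow_metrics are hypothetical stand-ins for the journal-driven FLUENT batch launch and the metric extraction, and the resulting objective pairs would feed the NSGA-II selection step.

      from concurrent.futures import ProcessPoolExecutor

      def run_fluent_case(design):
          # hypothetical: write boundary settings for this design, launch one
          # FLUENT instance in batch mode, and wait for convergence
          pass

      def read_flow_metrics(design):
          # hypothetical: parse weighted mean velocity, % low-velocity volume,
          # velocity variance and pumping cost from the case output (stub values)
          return {"circulation": 0.0, "cost": 0.0}

      def evaluate(design):
          run_fluent_case(design)
          m = read_flow_metrics(design)
          return (-m["circulation"], m["cost"])   # two objectives to minimize

      if __name__ == "__main__":
          population = [{"inlet_angle": a} for a in range(0, 90, 15)]  # toy designs
          with ProcessPoolExecutor(max_workers=4) as pool:  # 4 solver runs at once
              objectives = list(pool.map(evaluate, population))
          print(objectives)   # handed to the NSGA-II non-dominated sorting step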

  6. Automated Detection of Soma Location and Morphology in Neuronal Network Cultures

    PubMed Central

    Ozcan, Burcin; Negi, Pooran; Laezza, Fernanda; Papadakis, Manos; Labate, Demetrio

    2015-01-01

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, these types of stacks contain a small number of images so that only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present in this paper applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of level set method and directional multiscale filters. We also present an approach to extract the soma’s surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications. PMID:25853656

  7. PetriJet Platform Technology: An Automated Platform for Culture Dish Handling and Monitoring of the Contents.

    PubMed

    Vogel, Mathias; Boschke, Elke; Bley, Thomas; Lenk, Felix

    2015-08-01

    Due to the size of the required equipment, automated laboratory systems are often unavailable or impractical for use in small- and mid-sized laboratories. However, recent developments in automation engineering provide endless possibilities for incorporating benchtop devices. Here, the authors describe the development of a platform technology to handle sealed culture dishes. The programming is based on the Petri net method and implemented via Codesys V3.5 pbF. The authors developed a system of three independently driven electrical axes capable of handling sealed culture dishes. The device performs two different processes. First, it automatically obtains an image of every processed culture dish. Second, a server-based image analysis algorithm provides the user with several parameters of the cultivated sample on the culture dish. For demonstration purposes, the authors developed a continuous, systematic, nondestructive, and quantitative method for monitoring the growth of a hairy root culture, in which new results can be displayed with respect to the previous images. This system is highly accurate, and the results can be used to simulate the growth of biological cultures. The authors believe that the innovative features of this platform can be implemented, for example, in the food industry, clinical environments, and research laboratories. © 2015 Society for Laboratory Automation and Screening.
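
    The growth read-out can be sketched as reducing each dish image to the fraction of the dish covered by the culture, so that successive images yield a growth curve; the fixed grey-level threshold is a simplifying assumption, not the server-side algorithm itself.

      import numpy as np

      def covered_fraction(gray_image, threshold=0.45):
          """gray_image: 2-D array scaled to [0, 1]; returns the fraction of
          pixels classified as culture relative to the whole dish area."""
          return (gray_image > threshold).mean()

      # toy time series of images -> growth curve, newest compared to previous
      images = [np.random.rand(512, 512) * s for s in (0.5, 0.7, 0.9)]
      growth = [covered_fraction(img) for img in images]
      print(growth)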

  8. Parallel Measurement of Circadian Clock Gene Expression and Hormone Secretion in Human Primary Cell Cultures.

    PubMed

    Petrenko, Volodymyr; Saini, Camille; Perrin, Laurent; Dibner, Charna

    2016-11-11

    Circadian clocks are functional in all light-sensitive organisms, allowing for an adaptation to the external world by anticipating daily environmental changes. Considerable progress has been made in the field over the last decade in understanding the tight connection between the circadian clock and most aspects of physiology. However, unraveling the molecular basis that underlies the function of the circadian oscillator in humans remains a major technical challenge. Here, we provide a detailed description of an experimental approach for long-term (2-5 days) bioluminescence recording and outflow medium collection in cultured human primary cells. For this purpose, we transduced primary cells with a lentiviral luciferase reporter under the control of a core clock gene promoter, which allows for the parallel assessment of hormone secretion and circadian bioluminescence. Furthermore, we describe the conditions for disrupting the circadian clock in primary human cells by transfecting siRNA targeting CLOCK. Our results on the circadian regulation of insulin secretion by human pancreatic islets, and of myokine secretion by human skeletal muscle cells, are presented here to illustrate the application of this methodology. These settings can be used to study the molecular makeup of human peripheral clocks and to analyze their functional impact on primary cells under physiological or pathophysiological conditions.

  9. An engineered approach to stem cell culture: automating the decision process for real-time adaptive subculture of stem cells.

    PubMed

    Ker, Dai Fei Elmer; Weiss, Lee E; Junkers, Silvina N; Chen, Mei; Yin, Zhaozheng; Sandbothe, Michael F; Huh, Seung-il; Eom, Sungeun; Bise, Ryoma; Osuna-Highley, Elvira; Kanade, Takeo; Campbell, Phil G

    2011-01-01

    Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and developing robotic cell
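
    The threshold-crossing prediction step can be sketched as a simple linear trend on recent confluency estimates; the paper's actual predictor may differ, and the notification hook stands in for the text-message and e-mail alerts described above.

      import numpy as np

      def hours_to_threshold(t_h, confluency, threshold=0.8, window=12):
          """Fit a line to the last `window` (time, confluency) points and
          return the estimated hours until the threshold is crossed."""
          slope, intercept = np.polyfit(t_h[-window:], confluency[-window:], 1)
          if slope <= 0:
              return float("inf")            # not growing toward the threshold
          return (threshold - intercept) / slope - t_h[-1]

      t = np.arange(0, 24, 2.0)              # imaging every 2 h
      c = 0.2 + 0.025 * t                    # toy confluency trajectory
      eta = hours_to_threshold(t, c)
      if eta <= 4.0:                         # the 4-hour advance warning
          print("notify operator: subculture due in", round(eta, 1), "h")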

  10. Integrated microdevice for long-term automated perfusion culture without shear stress and real-time electrochemical monitoring of cells.

    PubMed

    Li, Lin-Mei; Wang, Wei; Zhang, Shu-Hui; Chen, Shi-Jing; Guo, Shi-Shang; Français, Olivier; Cheng, Jie-Ke; Huang, Wei-Hua

    2011-12-15

    Electrochemical techniques based on ultramicroelectrodes (UMEs) play a significant role in real-time monitoring of chemical messengers released from single cells. Conversely, precise monitoring of cells in vitro strongly depends on the adequate construction of the cellular physiological microenvironment. In this paper, we developed a multilayer microdevice which integrated a high-aspect-ratio poly(dimethylsiloxane) (PDMS) microfluidic device for long-term automated perfusion culture of cells without shear stress and an independently addressable microelectrode array (IAMEA) for real-time electrochemical monitoring of the cultured cells. A novel design, using a high-aspect-ratio circular "moat" and a ring-shaped micropillar array surrounding the cell culture chamber, combined with an automated "circular-centre" and "bottom-up" perfusion model, successfully provided continuous fresh medium and a stable and uniform microenvironment for cells. Two weeks of automated culture of the human umbilical endothelial cell line ECV304 and neuronal differentiation of rat pheochromocytoma (PC12) cells have been realized using this device. Furthermore, the quantal release of dopamine from individual PC12 cells during their culture or propagation process was monitored amperometrically in real time. The multifunctional microdevice developed in this paper integrates cellular microenvironment construction and real-time monitoring of cells during physiological processes, and may provide a versatile platform for cell-based biomedical analysis.

  11. PCR evaluation of false-positive signals from two automated blood-culture systems.

    PubMed

    Karahan, Z Ceren; Mumcuoglu, Ipek; Guriz, Haluk; Tamer, Deniz; Balaban, Neriman; Aysev, Derya; Akar, Nejat

    2006-01-01

    Rapid detection of micro-organisms from blood is one of the most critical functions of a diagnostic microbiology laboratory. Automated blood-culture systems reduce the time needed to detect positive cultures, and reduce specimen handling. The false-positive rate of such systems is 1-10%. In this study, the presence of pathogens in 'false-positive' bottles obtained from BACTEC 9050 (Becton Dickinson) and BacT/Alert (bioMérieux) systems was investigated by eubacterial and fungal PCR. A total of 169 subculture-negative aerobic blood-culture bottles (104 BacT/Alert and 65 BACTEC) were evaluated. Both fungal and eubacterial PCRs were negative for all BACTEC bottles. Fungal PCR was also negative for the BacT/Alert system, but 10 bottles (9.6%) gave positive results by eubacterial PCR. Sequence analysis of the positive PCR amplicons indicated the presence of the following bacteria (number of isolates in parentheses): Pasteurella multocida (1), Staphylococcus epidermidis (2), Staphylococcus hominis (1), Micrococcus sp. (1), Streptococcus pneumoniae (1), Corynebacterium spp. (2), Brachybacterium sp. (1) and Arthrobacter/Rothia sp. (1). Antibiotic usage by the patients may explain the laboratory's inability to grow these bacteria on subculture. For patients with more than one false-positive bottle, molecular methods can be used to evaluate the microbial DNA in these bottles. False positives from the BACTEC system may be due to elevated patient leukocyte counts or the high sensitivity of the system to background increases in CO2 concentration.

  12. An automated laboratory-scale methodology for the generation of sheared mammalian cell culture samples.

    PubMed

    Joseph, Adrian; Goldrick, Stephen; Mollet, Michael; Turner, Richard; Bender, Jean; Gruber, David; Farid, Suzanne S; Titchener-Hooker, Nigel

    2017-05-01

    Continuous disk-stack centrifugation is typically used for the removal of cells and cellular debris from mammalian cell culture broths at manufacturing-scale. The use of scale-down methods to characterise disk-stack centrifugation performance enables substantial reductions in material requirements and allows a much wider design space to be tested than is currently possible at pilot-scale. The process of scaling down centrifugation has historically been challenging due to the difficulties in mimicking the Energy Dissipation Rates (EDRs) in typical machines. This paper describes an alternative and easy-to-assemble automated capillary-based methodology to generate levels of EDRs consistent with those found in a continuous disk-stack centrifuge. Variations in EDR were achieved through changes in capillary internal diameter and the flow rate of operation through the capillary. The EDRs found to match the levels of shear in the feed zone of a pilot-scale centrifuge using the experimental method developed in this paper (2.4×10^5 W/kg) are consistent with those obtained through previously published computational fluid dynamic (CFD) studies (2.0×10^5 W/kg). Furthermore, this methodology can be incorporated into existing scale-down methods to model the process performance of continuous disk-stack centrifuges. This was demonstrated through the characterisation of culture hold time, culture temperature and EDRs on centrate quality. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
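
    One common way to estimate the average EDR in such a capillary device is as the pumping power dissipated per unit mass of fluid in the capillary, EDR = ΔP·Q/(ρ·V), with the pressure drop measured rather than computed from laminar theory (flows at these EDRs are not laminar). The sketch below uses illustrative values, not the study's dimensions.

      import math

      def mean_edr(delta_p_pa, q_m3s, d_m, l_m, rho=1000.0):
          """Average EDR (W/kg) for flow rate q through a capillary of internal
          diameter d and length l, given a measured pressure drop delta_p."""
          volume = math.pi * d_m**2 / 4 * l_m   # capillary internal volume (m^3)
          return delta_p_pa * q_m3s / (rho * volume)

      # e.g. a 0.5 mm ID, 20 mm long capillary at 60 mL/min with a 10 bar drop
      print(f"{mean_edr(1e6, 1e-6, 5e-4, 0.02):.2e} W/kg")   # ~2.5e5 W/kg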

  13. The Zymark BenchMate™. A compact, fully-automated solution-phase reaction work-up facility for multiple parallel synthesis

    PubMed Central

    Hamlin, Gordon A.

    2000-01-01

    The rapid growth of multiple parallel synthesis in our laboratories has created a demand for a robust, easily accessed automated system for solution-phase reaction work-up, since the manual work-up of large numbers of small-scale reactions is both time-consuming and tedious, and is a rate-limiting step in the generation of large numbers of compounds for test. Work-up in chemical organic synthesis consists of a series of post-reaction operations designed to use differential chemical properties to remove excess reagent or starting material, reagent products and, where possible, reaction by-products. Careful consideration of post-reaction operations as a clean-up step can obviate the requirement for purification. Generally, work-up can be resolved into four operations: filtration, solvent addition (dilution, trituration), washing and separation (partition), and it is the selection and ordering of these four basic operations that constitutes a chemical work-up. Following the proven success of centralized Zymate robotic systems in the compilation, execution and work-up of complex reaction sequences, a centralized chemical work-up service has been in operation for over 12 months. It then seemed prudent that the needs of multiple parallel synthesis would be better served by the development of a compact, automated system capable of operating in a standard chemistry laboratory fume-hood. A Zymark BenchMate platform has been configured to perform the four basic operations of chemical solution work-up. A custom-built filtration station, incorporating an integrated tipping facility for the sample tube, has also been developed. Compilation of each work-up is through a set of Visual Basic procedure screens, each dedicated to a particular work-up scenario. Methods are compiled at the chemist's own PC and transferred to the BenchMate via a diskette. PMID:18924692
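
    As an illustration of how the four primitive operations compose into a work-up, the following sketch models a work-up as a small data-driven program; the operation names and parameters are hypothetical stand-ins, not the BenchMate's actual procedure vocabulary.

      def filtration(sample, **kw):       print("filter", sample, kw); return sample
      def solvent_addition(sample, **kw): print("add", kw, "to", sample); return sample
      def washing(sample, **kw):          print("wash", sample, kw); return sample
      def separation(sample, **kw):       print("separate", sample, kw); return sample

      OPS = {"filter": filtration, "add": solvent_addition,
             "wash": washing, "separate": separation}

      # an aqueous work-up: dilute, wash with brine, keep the organic layer, dry
      workup = [("add", {"solvent": "EtOAc", "ml": 2.0}),
                ("wash", {"with": "brine", "ml": 1.0}),
                ("separate", {"keep": "organic"}),
                ("filter", {"through": "MgSO4"})]

      sample = "tube_A1"
      for op, params in workup:            # execute the compiled sequence
          sample = OPS[op](sample, **params)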

  14. Automated direct screening for resistance of Gram-negative blood cultures using the BD Kiestra WorkCell.

    PubMed

    Heather, C S; Maley, M

    2017-10-02

    Early detection of resistance in sepsis due to Gram-negative organisms may lead to improved outcomes by reducing the time to effective antibiotic therapy. Traditional methods of resistance detection require incubation times of 18 to 48 h to detect resistance. We have utilised automated specimen processing, digital imaging and zone size measurements in conjunction with direct disc susceptibility testing to develop a method for the rapid screening of Gram-negative blood culture isolates for resistance. Positive clinical blood cultures with Gram-negative organisms were prospectively identified and additional resistant mock specimens were prepared. Broth was plated and antibiotic-impregnated discs (ampicillin, ceftriaxone, piperacillin-tazobactam, meropenem, ciprofloxacin, gentamicin) were added. Plates were incubated, digitally imaged and zone sizes were measured using the BD Kiestra WorkCell laboratory automation system. Minimum, clinically useful, incubation times and optimised zone size cut-offs for resistance detection were determined. We included 187 blood cultures in the study. At 5 h of incubation, > 90% of plates yielded interpretable results. Using optimised zone size cut-offs, the sensitivity for resistance detection ranged from 87 to 100%, while the specificity ranged from 84.7 to 100%. The sensitivity and specificity for piperacillin-tazobactam resistance detection was consistently worse than for the other agents. Automated direct disc susceptibility screening is a rapid and sensitive tool for resistance detection in Gram-negative isolates from blood cultures for most of the agents tested.
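
    A minimal sketch of the screening read-out described above: measured inhibition zone diameters from the digitally imaged plates are compared against optimised per-agent cut-offs. The cut-off values in the sketch are placeholders, not those derived in the study.

      CUTOFFS_MM = {            # resistant if zone < cut-off (placeholder values)
          "ampicillin": 14,
          "ceftriaxone": 20,
          "piperacillin-tazobactam": 18,
          "meropenem": 22,
          "ciprofloxacin": 24,
          "gentamicin": 16,
      }

      def screen(zones_mm):
          """zones_mm: agent -> measured zone diameter (mm) at ~5 h incubation."""
          return {agent: ("R" if zones_mm[agent] < cutoff else "S")
                  for agent, cutoff in CUTOFFS_MM.items() if agent in zones_mm}

      print(screen({"ampicillin": 6, "meropenem": 30, "gentamicin": 15}))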

  15. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    NASA Astrophysics Data System (ADS)

    Giuliano, M. G.

    2014-06-01

    In the field of Cultural Heritage, automated photogrammetric systems based on Structure from Motion (SfM) techniques are widely used, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image Based Modeling) and a classical survey with a total station (Nikon Nivo C). Data acquisition was carried out with a Canon EOS 5D Mark II digital camera and a Canon EF 17-40 mm f/4L USM lens @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out with Agisoft PhotoScan, and the final result was a scaled 3D model of the monument, which was imported into MeshLab for viewing. Three orthophotos in JPG format were extracted from the model and then imported into AutoCAD to obtain façade surveys.

  16. Automated culture system experiments hardware: developing test results and design solutions.

    PubMed

    Freddi, M; Covini, M; Tenconi, C; Ricci, C; Caprioli, M; Cotronei, V

    2002-07-01

    The experiment proposed by Prof. Ricci (University of Milan) is funded by ASI, with Laben as industrial prime contractor. ACS-EH (Automated Culture System-Experiment Hardware) will support the multigenerational experiment on weightlessness with rotifers and nematodes within four Experiment Containers (ECs) located inside the European Modular Cultivation System (EMCS) facility. Phase B is currently in progress and a concept design solution has been defined. The most challenging aspects of the design of such hardware are, from the biological point of view, the provision of an environment that permits the animals' survival and keeps desiccated generations separated, and, from the technical point of view, the miniaturisation of the hardware itself, required by the reduced volume of the EC (160 mm x 60 mm x 60 mm). The miniaturisation will allow better use of the available EMCS facility resources (e.g., volume, power) and fulfilment of the experiment requirements. ACS-EH will be ready to fly on board the ISS in 2005.

  17. [Case report: positive signals from automated blood-culture system with negative direct Gram examination].

    PubMed

    Cecille, A; Garcia, B; Abi Khalil, C; Iranzo, A; Azencott, N

    2007-12-01

    Detection of positive blood cultures is usually managed by an automated system. When a bottle is flagged positive but Gram staining does not reveal organisms on direct examination, subculture onto chocolate blood agar generally allows bacteraemia to be confirmed or ruled out. In light of a case of Fusobacterium nucleatum bacteraemia, we discuss the value of pairing this subculture with an enrichment broth. M. N., hospitalized in the hepatogastroenterology department, had a fever of undetermined origin. Three pairs of blood samples were collected on May 7th, 2004, another pair on May 9th, 2004 and a last pair on May 10th, 2004, and incubated in a Bactec 9120 analyzer. A positive signal was detected in the two last anaerobic pairs after four days of incubation, but in both cases Gram staining did not reveal any organisms. The broths were systematically transferred onto chocolate blood agar and incubated under a CO2-enriched atmosphere and in anaerobiosis. After 24 hours, the solid media remained sterile. The samples flagged positive by the Bactec were then transferred into Schaedler broth to favour the growth of potential fastidious organisms. The culture proved positive only in this enrichment medium, allowing the identification of F. nucleatum. A hepatic abscess was subsequently revealed in the patient. It therefore appears judicious to pair the solid subculture medium with an enrichment medium when the context is suggestive of a true infection (clinical picture, time to positivity, etc.).

  18. Validation of shortened 2-day sterility testing of mesenchymal stem cell-based therapeutic preparation on an automated culture system.

    PubMed

    Lysák, Daniel; Holubová, Monika; Bergerová, Tamara; Vávrová, Monika; Cangemi, Giuseppina Cristina; Ciccocioppo, Rachele; Kruzliak, Peter; Jindra, Pavel

    2016-03-01

    Cell therapy products represent a new trend of treatment in the field of immunotherapy and regenerative medicine. Their biological nature and multistep preparation procedure require the application of complex release criteria and quality control. Microbial contamination of cell therapy products is a potential source of morbidity in recipients. Automated blood culture systems are widely used for the detection of microorganisms in cell therapy products. However, the standard 2-week cultivation period is too long for some cell-based treatments, and alternative methods have to be devised. We tried to verify whether a shortened cultivation of the supernatant from the mesenchymal stem cell (MSC) culture, obtained 2 days before the cell harvest, could sufficiently detect microbial growth and allow the release of MSC for clinical application. We compared the standard Ph. Eur. cultivation method and the automated blood culture system BACTEC (Becton Dickinson). The time to detection (TTD) and the detection limit were analyzed for three bacterial and two fungal strains. Staphylococcus aureus and Pseudomonas aeruginosa were recognized within 24 h with both methods (detection limit ~10 CFU). The time required for the detection of Bacillus subtilis was shorter with the automated method (TTD 10.3 vs. 60 h for 10-100 CFU). The BACTEC system reached significantly shorter times to the detection of Candida albicans and Aspergillus brasiliensis growth compared to the classical method (15.5 vs. 48 and 31.5 vs. 48 h, respectively; 10-100 CFU). The positivity was demonstrated within 48 h in all bottles, regardless of the size of the inoculum. This study validated the automated cultivation system as a method able to detect all tested microorganisms within a 48-h period with a detection limit of ~10 CFU. Only in the case of B. subtilis was the lowest inoculum (~10 CFU) not recognized. The 2-day cultivation technique is then capable of confirming the microbiological safety of MSC and

  19. Semi-automated relative quantification of cell culture contamination with mycoplasma by Photoshop-based image analysis on immunofluorescence preparations.

    PubMed

    Kumar, Ashok; Yerneni, Lakshmana K

    2009-01-01

    Mycoplasma contamination in cell culture is a serious setback for the cell-culturist. The experiments undertaken using contaminated cell cultures are known to yield unreliable or false results due to various morphological, biochemical and genetic effects. Earlier surveys revealed incidences of mycoplasma contamination in cell cultures to range from 15 to 80%. Out of a vast array of methods for detecting mycoplasma in cell culture, the cytological methods directly demonstrate the contaminating organism present in association with the cultured cells. In this investigation, we report the adoption of a cytological immunofluorescence assay (IFA), in an attempt to obtain a semi-automated relative quantification of contamination by employing the user-friendly Photoshop-based image analysis. The study performed on 77 cell cultures randomly collected from various laboratories revealed mycoplasma contamination in 18 cell cultures simultaneously by IFA and Hoechst DNA fluorochrome staining methods. It was observed that the Photoshop-based image analysis on IFA stained slides was very valuable as a sensitive tool in providing quantitative assessment on the extent of contamination both per se and in comparison to cellularity of cell cultures. The technique could be useful in estimating the efficacy of anti-mycoplasma agents during decontaminating measures.

  20. Corneal regeneration by transplantation of corneal epithelial cell sheets fabricated with automated cell culture system in rabbit model.

    PubMed

    Kobayashi, Toyoshige; Kan, Kazutoshi; Nishida, Kohji; Yamato, Masayuki; Okano, Teruo

    2013-12-01

    We have performed clinical applications of cell sheet-based regenerative medicine with human patients in several fields. In order to achieve the mass production of transplantable cell sheets, we have developed automated cell culture systems. Here, we report an automated robotic system utilizing a cell culture vessel, the cell cartridge. The cell cartridge had two compartments, for epithelial cells and feeder layer cells, separated by a porous membrane on which a temperature-responsive polymer was covalently immobilized. After pouring cells into this robotic system, cell seeding, medium change, and microscopic examination during culture were automatically performed according to the computer program. Transplantable corneal epithelial cell sheets were successfully fabricated in cell cartridges with this robotic system. The fabricated cell sheets were then transplanted onto the ocular surfaces of a rabbit limbal epithelial stem cell deficiency model after 6-h transportation in a portable homothermal container that kept the inner temperature at 36 °C. Within one week after transplantation, normal corneal epithelium was successfully regenerated. This automated cell culture system would be useful for the industrialization of tissue-engineered products for regenerative medicine.

  1. Single cell-based automated quantification of therapy responses of invasive cancer spheroids in organotypic 3D culture.

    PubMed

    Veelken, Cornelia; Bakker, Gert-Jan; Drell, David; Friedl, Peter

    2017-09-01

    Organotypic in vitro cultures of 3D spheroids in an extracellular matrix represent a promising cancer therapy prediction model for personalized medicine screens, due to their controlled experimental conditions and physiological similarities to in vivo conditions. As in tumors in vivo, 3D invasion cultures reveal intratumor heterogeneity of growth, invasion and apoptosis induction by cytotoxic therapy. We here combine in vitro 3D spheroid invasion culture with irradiation and automated nucleus-based segmentation for single-cell analysis, to quantify growth, survival, apoptosis and invasion response during experimental radiation therapy. As output, multi-parameter histogram-based representations deliver an integrated insight into therapy response and resistance. This workflow may be suited for high-throughput screening and identification of invasive and therapy-resistant tumor sub-populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Automated recombinant protein expression screening in Escherichia coli.

    PubMed

    Busso, Didier; Stierlé, Matthieu; Thierry, Jean-Claude; Moras, Dino

    2008-01-01

    To fit the requirements of structural genomics programs, new as well as classical methods have been adapted to automation. This chapter describes the automated procedure developed within the Structural Biology and Genomics Platform, Strasbourg, for performing recombinant protein expression screening in Escherichia coli. The procedure consists of parallel competent cell transformation, cell plating, and liquid culture inoculation, implemented for up to 96 samples at a time.

  3. Improved standardization and potential for shortened time to results with BD Kiestra™ total laboratory automation of early urine cultures: A prospective comparison with manual processing.

    PubMed

    Graham, Maryza; Tilson, Leanne; Streitberg, Richard; Hamblin, John; Korman, Tony M

    2016-09-01

    We compared the results of 505 urine specimens prospectively processed both by conventional manual processing (MP) with 16-24 h incubation and by the BD Kiestra™ Total Laboratory Automation (TLA) system with a shortened incubation of 14 h: 97% of culture results were clinically concordant. TLA processing was associated with improved standardization of the time of first culture reading and of the total incubation time.

  4. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    ERIC Educational Resources Information Center

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  6. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    PubMed Central

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-01

    Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at . This site also provides detailed instructions for

  7. Parallel factor analysis (PARAFAC) of target analytes in GC x GC-TOFMS data: automated selection of a model with an appropriate number of factors.

    PubMed

    Hoggard, Jamin C; Synovec, Robert E

    2007-02-15

    PARAFAC (parallel factor analysis) is a powerful chemometric method that has been demonstrated as a useful deconvolution technique in dealing with data obtained using comprehensive two-dimensional gas chromatography combined with time-of-flight mass spectrometry (GC x GC-TOFMS). However, selection of a PARAFAC model having an appropriate number of factors can be challenging, especially at low S/N or for analytes in the presence of chromatographic and spectral overlapping compounds (interferences). Herein, we present a method for the automated selection of a PARAFAC model with an appropriate number of factors in GC x GC-TOFMS data, demonstrated for a target analyte of interest. The approach taken in the methodology is as follows. PARAFAC models are automatically generated having an incrementally higher number of factors until mass spectral matching of the corresponding loadings in the model against a target analyte mass spectrum indicates overfitting has occurred. Then, the model selected simply has one less factor than the overfit model. Results indicate this model selection approach is viable across the detection range of the instrument from overloaded analyte signal down to low S/N analyte signal (total ion current signal intensity at analyte peak maximum S/N < 1). While the methodology is generally applicable to comprehensive two-dimensional separations using multichannel spectral detection, we evaluated it with several target analytes using GC x GC-TOFMS. For brevity in this report, only results for bromobenzene as target analyte are presented. Alternatively, instead of using the model with one less factor than the overfit model, one can select the model with the highest mass spectral match for the target analyte from among all the models generated (excluding the overfit model). Both model selection approaches gave essentially identical results.
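
    The selection loop can be sketched as follows, using tensorly's PARAFAC implementation as a stand-in and cosine similarity as the mass spectral match score; treating "best match stops improving" as the overfit signal is an assumed simplification of the published criterion.

      import numpy as np
      from tensorly.decomposition import parafac

      def best_match(factors, target_ms):
          """Highest cosine similarity between m/z-mode loadings and target."""
          ms_loadings = factors[2]            # mode 2 = mass spectral channel
          sims = [abs(np.dot(v, target_ms))
                  / (np.linalg.norm(v) * np.linalg.norm(target_ms))
                  for v in ms_loadings.T]
          return max(sims)

      def select_model(X, target_ms, max_rank=10):
          """Fit models with 1, 2, ... factors; stop when the target-spectrum
          match no longer improves and return the previous (unoverfit) model."""
          prev = None
          for rank in range(1, max_rank + 1):
              weights, factors = parafac(np.asarray(X), rank=rank, n_iter_max=200)
              score = best_match(factors, target_ms)
              if prev is not None and score <= prev[1]:
                  return prev[0]              # model with one fewer factor
              prev = (factors, score)
          return prev[0]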

  8. Quantitative high-throughput population dynamics in continuous-culture by automated microscopy

    PubMed Central

    Merritt, Jason; Kuehn, Seppe

    2016-01-01

    We present a high-throughput method to measure abundance dynamics in microbial communities sustained in continuous-culture. Our method uses custom epi-fluorescence microscopes to automatically image single cells drawn from a continuously-cultured population while precisely controlling culture conditions. For clonal populations of Escherichia coli our instrument reveals history-dependent resilience and growth rate dependent aggregation. PMID:27616752

  9. Diagnostic accuracy of uriSed automated urine microscopic sediment analyzer and dipstick parameters in predicting urine culture test results.

    PubMed

    Huysal, Kağan; Budak, Yasemin U; Karaca, Ayse Ulusoy; Aydos, Murat; Kahvecioğlu, Serdar; Bulut, Mehtap; Polat, Murat

    2013-01-01

    Urinary tract infection (UTI) is one of the most common types of infection. Currently, diagnosis is primarily based on microbiologic culture, which is time-consuming and labor-intensive. The aim of this study was to assess the diagnostic accuracy of urinalysis results from UriSed (77 Elektronika, Budapest, Hungary), an automated microscopic image-based sediment analyzer, in predicting positive urine cultures. We examined a total of 384 urine specimens from hospitalized patients and outpatients attending our hospital on the same day for urinalysis, dipstick tests and semi-quantitative urine culture. The urinalysis results were compared with those of conventional semiquantitative urine culture. Of 384 urinary specimens, 68 were positive for bacteriuria by culture, and were thus considered true positives. Comparison of these results with those obtained from the UriSed analyzer indicated that the analyzer had a specificity of 91.1%, a sensitivity of 47.0%, a positive predictive value (PPV) of 53.3% (95% confidence interval (CI) = 40.8-65.3), and a negative predictive value (NPV) of 88.8% (95% CI = 85.0-91.8%). The accuracy was 83.3% when the urine leukocyte parameter was used, 76.8% when bacteriuria analysis of urinary sediment was used, and 85.1% when the bacteriuria and leukocyturia parameters were combined. The presence of nitrite was the best indicator of culture positivity (99.3% specificity) but had a negative likelihood ratio of 0.7, indicating that it was not a reliable clinical test. Although the specificity of the UriSed analyzer was within acceptable limits, the sensitivity value was low. Thus, UriSed urinalysis results do not accurately predict the outcome of culture.
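
    For reference, the four reported accuracy measures derive from a 2x2 confusion table as sketched below; the counts are back-calculated approximations from the percentages quoted in the abstract (68 culture positives among 384 specimens), shown only to illustrate the arithmetic.

      def accuracy_measures(tp, fp, tn, fn):
          return {
              "sensitivity": tp / (tp + fn),   # positives correctly flagged
              "specificity": tn / (tn + fp),   # negatives correctly cleared
              "PPV": tp / (tp + fp),           # flagged samples truly positive
              "NPV": tn / (tn + fn),           # cleared samples truly negative
          }

      # ~32 of 68 culture positives flagged (47.0%); ~288 of 316 negatives cleared (91.1%)
      print(accuracy_measures(tp=32, fp=28, tn=288, fn=36))
      # -> sensitivity 0.47, specificity 0.911, PPV 0.533, NPV 0.889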

  10. An automated robotic platform for rapid profiling oligosaccharide analysis of monoclonal antibodies directly from cell culture.

    PubMed

    Doherty, Margaret; Bones, Jonathan; McLoughlin, Niaobh; Telford, Jayne E; Harmon, Bryan; DeFelippis, Michael R; Rudd, Pauline M

    2013-11-01

    Oligosaccharides attached to Asn297 in each of the CH2 domains of monoclonal antibodies play an important role in antibody effector functions by modulating the affinity of interaction with Fc receptors displayed on cells of the innate immune system. Rapid, detailed, and quantitative N-glycan analysis is required at all stages of bioprocess development to ensure the safety and efficacy of the therapeutic. The high sample numbers generated during quality by design (QbD) and process analytical technology (PAT) create a demand for high-performance, high-throughput analytical technologies for comprehensive oligosaccharide analysis. We have developed an automated 96-well plate-based sample preparation platform for high-throughput N-glycan analysis using a liquid handling robotic system. Complete process automation includes monoclonal antibody (mAb) purification directly from bioreactor media, glycan release, fluorescent labeling, purification, and subsequent ultra-performance liquid chromatography (UPLC) analysis. The entire sample preparation and commencement of analysis is achieved within a 5-h timeframe. The automated sample preparation platform can easily be interfaced with other downstream analytical technologies, including mass spectrometry (MS) and capillary electrophoresis (CE), for rapid characterization of oligosaccharides present on therapeutic antibodies. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Design and Performance of an Automated Bioreactor for Cell Culture Experiments in a Microgravity Environment

    NASA Astrophysics Data System (ADS)

    Kim, Youn-Kyu; Park, Seul-Hyun; Lee, Joo-Hee; Choi, Gi-Hyuk

    2015-03-01

    In this paper, we describe the development of a bioreactor for a cell-culture experiment on the International Space Station (ISS). The bioreactor is an experimental device for culturing mouse muscle cells in a microgravity environment. The purpose of the experiment was to assess the impact of microgravity on muscle, to address the possibility of long-term human residence in space. After investigation of previously developed bioreactors, and analysis of the requirements for microgravity cell culture experiments, a bioreactor design is herein proposed that is able to automatically culture 32 samples simultaneously. This design is capable of automatic control of temperature, humidity, and culture-medium injection rate, and satisfies the interface requirements of the ISS. Since bioreactors are vulnerable to cell contamination, the medium-circulation modules were designed to be completely replaceable, so that the bioreactor can be reused after each experiment. The bioreactor control system is designed to circulate culture media to the 32 culture chambers at a maximum speed of 1 ml/min, to maintain the temperature of the reactor at 36°C, and to keep the relative humidity of the reactor above 70%. Because bubbles in the culture media negatively affect cell culture, a de-bubbler unit was provided to eliminate such bubbles. A working model of the reactor was built according to the new design to verify its performance, and was used to perform a cell culture experiment that confirmed the feasibility of the device.
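
    A minimal control-loop sketch for the stated set-points (36 °C, relative humidity above 70%, medium flow capped at 1 ml/min) follows; the sensor and actuator functions are hypothetical stand-ins for the flight hardware interfaces.

      import time

      TEMP_SET_C, RH_MIN_PCT, FLOW_MAX_ML_MIN = 36.0, 70.0, 1.0

      def read_temp_c(): return 35.8           # hypothetical sensor stubs
      def read_rh_pct(): return 68.0
      def set_heater(on): print("heater", "on" if on else "off")
      def set_humidifier(on): print("humidifier", "on" if on else "off")
      def set_pump(ml_min): print(f"pump at {ml_min:.2f} ml/min")

      def control_step(flow_cmd_ml_min):
          set_heater(read_temp_c() < TEMP_SET_C)        # simple on/off regulation
          set_humidifier(read_rh_pct() < RH_MIN_PCT)
          set_pump(min(max(flow_cmd_ml_min, 0.0), FLOW_MAX_ML_MIN))

      for _ in range(3):                               # e.g. one control cycle per second
          control_step(flow_cmd_ml_min=0.5)
          time.sleep(1.0)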

  12. Roadmap to approval: use of an automated sterility test method as a lot release test for Carticel, autologous cultured chondrocytes.

    PubMed

    Kielpinski, G; Prinzi, S; Duguid, J; du Moulin, G

    2005-01-01

    In February 2004, FDA approved a supplement to our biologics license for Carticel, autologous cultured chondrocytes, to use the BacT/ALERT microbial detection system as an alternative to the compendial sterility test for lot release. This article provides a roadmap to our approval process. The approval represents more than 4 years of development and validation studies comparing the Steritest compact system to the BacT/ALERT microbial detection system. For this study, freshly cultured chondrocytes were prepared from a characterized cell bank. Microbial isolates were prepared from either American Type Culture Collection (ATCC) strains or from in-house contaminants. For each test condition, a suspension of chondrocyte cells and test organisms was inoculated into both aerobic media (SA standard aerobic culture bottles, FA FAN bottles, tryptic soy broth) and anaerobic media (SN standard anaerobic culture bottles, FN FAN bottles, fluid thioglycollate medium) and tested for sterility using the Steritest compact system (Millipore, Bedford, MA, USA) and the BacT/ALERT microbial detection system (bioMerieux, Durham, NC, USA). Negative control bottles were inoculated with chondrocytes and no microorganisms. All bottles were incubated for 14 days and read daily. Bacterial growth was determined by either visual examination of Steritest canisters or detection of a positive by the BacT/ALERT system. A Gram stain and streak plate were used to confirm positive bottles, and negative bottles after 14 days. The detection of a positive by either the Steritest compact system or the BacT/ALERT system was summarized for each organism in each validation study. Studies reducing the incubation temperature from 35 °C to 32 °C showed improved detection times with the automated method compared with the compendial method. Other improvements included the use of FAN aerobic and anaerobic media to absorb the gentamicin contained in the culture media of prepared chondrocyte samples. Chondrocytes

  13. Fully-automated roller bottle handling system for large scale culture of mammalian cells.

    PubMed

    Kunitake, R; Suzuki, A; Ichihashi, H; Matsuda, S; Hirai, O; Morimoto, K

    1997-01-20

    A fully automatic and continuous cell culture system based on roller bottles is described in this paper. The system includes a culture rack storage station for storing a large number of roller bottles filled with culture medium and inoculated with mammalian cells, and a mass-handling facility for extracting completed cultures from the roller bottles and replacing the culture medium. The various component units of the system were controlled either by a general-purpose programmable logic controller or by a dedicated controller. The system provided four sequential operation modes: cell inoculation, medium change, harvesting, and medium change. The operator could easily select and change the appropriate mode from outside the aseptic area. The system made possible the large-scale production of mammalian cells, and the manufacture and stabilization of high-quality products such as erythropoietin, under total aseptic control, and opened the door to industrial production of physiologically active substances as pharmaceutical drugs by mammalian cell culture.

  14. Surveillance cultures of samples obtained from biopsy channels and automated endoscope reprocessors after high-level disinfection of gastrointestinal endoscopes

    PubMed Central

    2012-01-01

    Background The instrument channels of gastrointestinal (GI) endoscopes may be heavily contaminated with bacteria even after high-level disinfection (HLD). The British Society of Gastroenterology guidelines emphasize the benefits of manually brushing endoscope channels and of using automated endoscope reprocessors (AERs) for disinfecting endoscopes. In this study, we aimed to assess the effectiveness of decontamination using reprocessors after HLD by comparing cultured samples obtained from the biopsy channels (BCs) of GI endoscopes and from the internal surfaces of the AERs. Methods We conducted a 5-year prospective study. Every month, random consecutive sampling was carried out after a complete reprocessing cycle; 420 rinse samples and 420 swab samples were collected from the BCs and from the internal surfaces of the AERs, respectively. Of the 420 rinse samples collected from the BCs of the GI endoscopes, 300 were obtained from gastroscopes and 120 from colonoscopes. Samples were collected by flushing the BCs with sterile distilled water and by swabbing the residual water from the AERs after reprocessing. These samples were cultured to detect the presence of aerobic and anaerobic bacteria and mycobacteria. Results The number of culture-positive samples obtained from the BCs (13.6%, 57/420) was significantly higher than that obtained from the AERs (1.7%, 7/420). In addition, the numbers of culture-positive samples obtained from the BCs of gastroscopes (10.7%, 32/300) and colonoscopes (20.8%, 25/120) were significantly higher than those obtained from the AERs used to reprocess gastroscopes (2.0%, 6/300) and colonoscopes (0.8%, 1/120). Conclusions Culturing rinse samples obtained from the BCs provides a better indication of the effectiveness of the decontamination of GI endoscopes after HLD than culturing swab samples obtained from the inner surfaces of the AERs, as the swab samples indicate only whether the AERs themselves are free from microbial contamination. PMID:22943739

  15. Surveillance cultures of samples obtained from biopsy channels and automated endoscope reprocessors after high-level disinfection of gastrointestinal endoscopes.

    PubMed

    Chiu, King-Wah; Tsai, Ming-Chao; Wu, Keng-Liang; Chiu, Yi-Chun; Lin, Ming-Tzung; Hu, Tsung-Hui

    2012-09-03

    The instrument channels of gastrointestinal (GI) endoscopes may be heavily contaminated with bacteria even after high-level disinfection (HLD). The British Society of Gastroenterology guidelines emphasize the benefits of manually brushing endoscope channels and of using automated endoscope reprocessors (AERs) for disinfecting endoscopes. In this study, we aimed to assess the effectiveness of decontamination using reprocessors after HLD by comparing cultured samples obtained from the biopsy channels (BCs) of GI endoscopes and from the internal surfaces of the AERs. We conducted a 5-year prospective study. Every month, random consecutive sampling was carried out after a complete reprocessing cycle; 420 rinse samples and 420 swab samples were collected from the BCs and from the internal surfaces of the AERs, respectively. Of the 420 rinse samples collected from the BCs of the GI endoscopes, 300 were obtained from gastroscopes and 120 from colonoscopes. Samples were collected by flushing the BCs with sterile distilled water and by swabbing the residual water from the AERs after reprocessing. These samples were cultured to detect the presence of aerobic and anaerobic bacteria and mycobacteria. The number of culture-positive samples obtained from the BCs (13.6%, 57/420) was significantly higher than that obtained from the AERs (1.7%, 7/420). In addition, the numbers of culture-positive samples obtained from the BCs of gastroscopes (10.7%, 32/300) and colonoscopes (20.8%, 25/120) were significantly higher than those obtained from the AERs used to reprocess gastroscopes (2.0%, 6/300) and colonoscopes (0.8%, 1/120). Culturing rinse samples obtained from the BCs provides a better indication of the effectiveness of the decontamination of GI endoscopes after HLD than culturing swab samples obtained from the inner surfaces of the AERs, as the swab samples indicate only whether the AERs themselves are free from microbial contamination.
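
    As a worked illustration of the proportion comparison reported above, the counts from the abstract (57/420 culture-positive biopsy-channel samples versus 7/420 AER samples) can be tested in Python with scipy. The choice of Fisher's exact test is an assumption made here for illustration; the abstract does not name the test the authors used.

      from scipy.stats import fisher_exact

      # Counts from the abstract: culture-positive vs. culture-negative samples.
      bc_pos, bc_total = 57, 420    # biopsy-channel rinse samples
      aer_pos, aer_total = 7, 420   # AER internal-surface swab samples

      table = [[bc_pos, bc_total - bc_pos],
               [aer_pos, aer_total - aer_pos]]
      odds_ratio, p_value = fisher_exact(table)
      print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")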

  16. Direct blood culturing on solid medium outperforms an automated continuously monitored broth-based blood culture system in terms of time to identification and susceptibility testing

    PubMed Central

    Idelevich, E.A.; Grünastel, B.; Peters, G.; Becker, K.

    2015-01-01

    Pathogen identification and antimicrobial susceptibility testing (AST) should be available as soon as possible for patients with bloodstream infections. We investigated whether a lysis-centrifugation (LC) blood culture (BC) method, combined with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) identification and Vitek 2 AST, provides a time advantage over the currently used automated broth-based BC system. Seven bacterial reference strains were each added to 10 mL of human blood at final concentrations of 100, 10 and 1 CFU/mL. Inoculated blood was added to the Isolator 10 tube and centrifuged at 3000 g for 30 min, after which 1.5 mL of sediment was distributed onto five 150-mm agar plates. Growth was observed hourly and microcolonies were subjected to MALDI-TOF MS and Vitek 2 as soon as possible. For comparison, seeded blood was introduced into an aerobic BC bottle and incubated in the BACTEC 9240 automated BC system. For all species/concentration combinations except one, successful identification and Vitek 2 inoculation were achieved even before growth detection by BACTEC. The fastest identification and inoculation for AST were achieved with Escherichia coli at concentrations of 100 CFU/mL and 10 CFU/mL (after 7 h each, while BACTEC flagged the respective samples positive after 9.5 h and 10 h). Use of the LC-BC method allows incubation in automated BC systems to be skipped and, used in combination with rapid diagnostics from microcolonies, provides a considerable advantage in time to result. This suggests that the usefulness of direct BC on solid medium should be re-evaluated in the era of rapid microbiology. PMID:26909155

  17. Direct blood culturing on solid medium outperforms an automated continuously monitored broth-based blood culture system in terms of time to identification and susceptibility testing.

    PubMed

    Idelevich, E A; Grünastel, B; Peters, G; Becker, K

    2016-03-01

    Pathogen identification and antimicrobial susceptibility testing (AST) should be available as soon as possible for patients with bloodstream infections. We investigated whether a lysis-centrifugation (LC) blood culture (BC) method, combined with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) identification and Vitek 2 AST, provides a time advantage over the currently used automated broth-based BC system. Seven bacterial reference strains were each added to 10 mL of human blood at final concentrations of 100, 10 and 1 CFU/mL. Inoculated blood was added to the Isolator 10 tube and centrifuged at 3000 g for 30 min, after which 1.5 mL of sediment was distributed onto five 150-mm agar plates. Growth was observed hourly and microcolonies were subjected to MALDI-TOF MS and Vitek 2 as soon as possible. For comparison, seeded blood was introduced into an aerobic BC bottle and incubated in the BACTEC 9240 automated BC system. For all species/concentration combinations except one, successful identification and Vitek 2 inoculation were achieved even before growth detection by BACTEC. The fastest identification and inoculation for AST were achieved with Escherichia coli at concentrations of 100 CFU/mL and 10 CFU/mL (after 7 h each, while BACTEC flagged the respective samples positive after 9.5 h and 10 h). Use of the LC-BC method allows incubation in automated BC systems to be skipped and, used in combination with rapid diagnostics from microcolonies, provides a considerable advantage in time to result. This suggests that the usefulness of direct BC on solid medium should be re-evaluated in the era of rapid microbiology.

  18. Parallel experimental design and multivariate analysis provides efficient screening of cell culture media supplements to improve biosimilar product quality.

    PubMed

    Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin

    2017-02-15

    Rational, high-throughput optimization of mammalian cell culture media has great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, the 17 compounds were separated into five groups according to their mode of biological action. The concentration ranges of the medium supplements were defined from information in the literature and in-house experience. The screening experiments produced wide ranges of glycosylation patterns. Multivariate analysis, including principal component analysis and decision trees, was used to select the best-performing glycosylation modulators. A subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in the shake tube experiments: 75% of the conditions were as close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;9999: 1-11. © 2017 Wiley Periodicals, Inc.
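
    The abstract names principal component analysis among the multivariate tools used to rank the supplements. A minimal, self-contained sketch of that step in Python with scikit-learn; the 96-well response matrix below is a fabricated stand-in, since the screening data themselves are not given in the abstract.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # Stand-in response matrix: 96 wells x 8 measured glycan fractions.
      glycans = rng.dirichlet(np.ones(8), size=96)

      # Standardize each glycan fraction, then project onto two components.
      pca = PCA(n_components=2)
      scores = pca.fit_transform(StandardScaler().fit_transform(glycans))
      print(scores.shape, pca.explained_variance_ratio_)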

  19. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration.

    PubMed

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-07

    Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to perform the different steps in the analysis of ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. We have created EST2uni, an integrated, highly configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools, and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel on a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http://bioinf.comav.upv.es/est2uni. This site also provides detailed instructions

  20. Feasibility of implementing an automated culture system for bacteria screening in platelets in the blood bank routine.

    PubMed

    Castro, E; Bueno, J L; Barea, L; González, R

    2005-06-01

    Bacterial contamination of blood components is the principal infectious complication linked to transfusion. The aim of the study was to evaluate the applicability of an automated culture system for platelets. 10 141 platelet concentrates were cultured individually and in pools of five on storage days 1 and 7 using BacT/ALERT system aerobic bottles. A modified collection bag was used for improved sampling. Five-millilitre samples were cultured at 37 degrees C for 7 days. Only those samples in which the same bacteria were identified on reculture were considered true positives (TP). Homogeneity of proportions was tested by Fisher's exact test. The TP rate was 30 per 100 000 (95% CI, 6.1-86.4) when sampling on day 1; 33 per 100 000 (95% CI, 7-96) on day 7; and 40 per 100 000 (95% CI, 1.28-122.4) when the screening was based on taking both samples (days 1 and 7). Only one TP was detected in pool testing. The time to detection among TPs on day 1 ranged between 30 and 134 h. The system is not considered practical for use as a routine screening method, as the time to detection is too long, and pool testing is insensitive. Faster screening methods or pathogen-inactivation systems are needed.
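
    The day-1 rate and confidence interval quoted above are consistent with 3 true positives among the 10 141 units (3/10 141 is roughly 30 per 100 000). A quick Python check with scipy's exact (Clopper-Pearson) binomial interval, on the assumption that this is the interval the authors computed:

      from scipy.stats import binomtest

      tp, n = 3, 10141  # inferred: 3 day-1 true positives among 10 141 units
      ci = binomtest(tp, n).proportion_ci(confidence_level=0.95, method="exact")
      print(f"rate = {1e5 * tp / n:.0f} per 100 000")
      print(f"95% CI = {1e5 * ci.low:.1f} to {1e5 * ci.high:.1f} per 100 000")
      # Prints ~30 per 100 000 with CI ~6.1 to 86.4, matching the abstract.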

  1. Attempts to Automate the Process of Generation of Orthoimages of Objects of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Podlasiak, P.; Zawieska, D.

    2015-02-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. The orthoimage is a cartometric form of photographic presentation of information in a two-dimensional reference system. This paper discusses the automation of orthoimage generation based on TLS data and digital images. Modern technologies are now applied not only to the surveys themselves but also to data processing. We present attempts to use appropriate algorithms, and an application developed by the authors, for automatic generation of the projection plane needed to derive intensity orthoimages from TLS data; in most popular TLS data processing applications, such planes are defined manually. A separate issue related to RGB image generation is the orientation of digital images relative to the scans, which matters in particular when scans and photographs are not taken simultaneously. We present experiments on the use of the SIFT algorithm for automatic matching of intensity orthoimages and digital (RGB) photographs. Satisfactory results were obtained, both for the automation of the process and for the quality of the resulting orthoimages.
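
    A minimal OpenCV sketch of the SIFT matching step described above, pairing a TLS intensity orthoimage with an RGB photograph. The file names are placeholders, and the paper's full pipeline (projection-plane generation, image orientation) is not reproduced here.

      import cv2

      # Placeholder inputs: a TLS intensity orthoimage and an RGB photograph.
      ortho = cv2.imread("intensity_orthoimage.png", cv2.IMREAD_GRAYSCALE)
      photo = cv2.imread("photo_rgb.jpg", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(ortho, None)
      kp2, des2 = sift.detectAndCompute(photo, None)

      # Lowe's ratio test to keep only distinctive correspondences.
      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.75 * n.distance]
      print(f"{len(good)} putative correspondences")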

  2. Rapid Prototyping of a Cyclic Olefin Copolymer Microfluidic Device for Automated Oocyte Culturing.

    PubMed

    Berenguel-Alonso, Miguel; Sabés-Alsina, Maria; Morató, Roser; Ymbern, Oriol; Rodríguez-Vázquez, Laura; Talló-Parra, Oriol; Alonso-Chamarro, Julián; Puyol, Mar; López-Béjar, Manel

    2017-01-01

    Assisted reproductive technology (ART) can benefit from the features of microfluidic technologies, such as the automation of time-consuming labor-intensive procedures, the possibility to mimic in vivo environments, and the miniaturization of the required equipment. To date, most of the proposed approaches are based on polydimethylsiloxane (PDMS) as platform substrate material due to its widespread use in academia, despite certain disadvantages, such as the elevated cost of mass production. Herein, we present a rapid fabrication process for a cyclic olefin copolymer (COC) monolithic microfluidic device combining hot embossing, using a low-temperature cofired ceramic (LTCC) master, and micromilling. The microfluidic device was suitable for trapping and maturation of bovine oocytes, which were further studied to determine their ability to be fertilized. Furthermore, another COC microfluidic device was fabricated to store sperm and assess its quality parameters over time. The study herein presented demonstrates a good biocompatibility of the COC when working with gametes, and it exhibits certain advantages, such as the nonabsorption of small molecules, gas impermeability, and low fabrication costs, all at the prototyping and mass production scale, thus taking a step further toward fully automated microfluidic devices in ART.

  3. Rapid Prototyping of a Cyclic Olefin Copolymer Microfluidic Device for Automated Oocyte Culturing.

    PubMed

    Berenguel-Alonso, Miguel; Sabés-Alsina, Maria; Morató, Roser; Ymbern, Oriol; Rodríguez-Vázquez, Laura; Talló-Parra, Oriol; Alonso-Chamarro, Julián; Puyol, Mar; López-Béjar, Manel

    2017-10-01

    Assisted reproductive technology (ART) can benefit from the features of microfluidic technologies, such as the automation of time-consuming labor-intensive procedures, the possibility to mimic in vivo environments, and the miniaturization of the required equipment. To date, most of the proposed approaches are based on polydimethylsiloxane (PDMS) as platform substrate material due to its widespread use in academia, despite certain disadvantages, such as the elevated cost of mass production. Herein, we present a rapid fabrication process for a cyclic olefin copolymer (COC) monolithic microfluidic device combining hot embossing, using a low-temperature cofired ceramic (LTCC) master, and micromilling. The microfluidic device was suitable for trapping and maturation of bovine oocytes, which were further studied to determine their ability to be fertilized. Furthermore, another COC microfluidic device was fabricated to store sperm and assess its quality parameters over time. The study herein presented demonstrates a good biocompatibility of the COC when working with gametes, and it exhibits certain advantages, such as the nonabsorption of small molecules, gas impermeability, and low fabrication costs, all at the prototyping and mass production scale, thus taking a step further toward fully automated microfluidic devices in ART.

  4. Quantitative analysis of a biopharmaceutical protein in cell culture samples using automated capillary electrophoresis (CE) western blot.

    PubMed

    Xu, Dong; Marchionni, Kentaro; Hu, Yunli; Zhang, Wei; Sosic, Zoran

    2017-10-25

    An effective control strategy is critical to ensure the safety, purity and potency of biopharmaceuticals. Appropriate analytical tools are needed to realize such goals by providing information on product quality at an early stage, helping to understand and control the manufacturing process. In this work, a fully automated, multi-capillary instrument is utilized for size-based separation and western blot analysis to provide an early readout on product quality and thereby enable a more consistent manufacturing process. This approach measures two important quality attributes of a biopharmaceutical protein, titer and isoform distribution, in cell culture harvest samples. The acquired isoform distribution data can then be used to predict the corresponding values of the final drug substance and, should the values fall outside the accepted range, potentially provide information for remediation through timely adjustment of the downstream purification process. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  5. PROJECT FOR AN AUTOMATED PRIMARY-GRADE READING AND ARITHMETIC CURRICULUM FOR CULTURALLY-DEPRIVED CHILDREN. PROGRESS REPORT NUMBER 5, JULY 1 TO DECEMBER 31, 1966.

    ERIC Educational Resources Information Center

    ATKINSON, RICHARD C.; SUPPES, PATRICK

    THIS REPORT ON THE PROGRESS OF THE IBM 1800/1500 CAI SYSTEM, AN AUTOMATED READING AND ARITHMETIC CURRICULUM FOR CULTURALLY DEPRIVED CHILDREN IN THE PRIMARY GRADES, DISCUSSES THE PROBLEMS INVOLVED IN GETTING THE SYSTEM INTO OPERATION IN THE BRENTWOOD SCHOOL IN STANFORD, CALIF. THE OPERATIONAL FEATURES OF THIS IBM SYSTEM AND THE METHODS BY WHICH THE…

  6. Automated Method for the Rapid and Precise Estimation of Adherent Cell Culture Characteristics from Phase Contrast Microscopy Images

    PubMed Central

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-01-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixel image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm in typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy-to-use graphical user interface. Source code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521
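
    The core idea named above, local contrast thresholding of phase contrast images, can be sketched in a few lines of Python with scipy. This is a simplified rendition rather than the published PHANTAST implementation, and the window size and threshold are illustrative values.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_contrast_segment(img, window=15, threshold=0.05):
          """Segment cellular regions in a phase contrast image.

          Local contrast is approximated as local standard deviation divided
          by local mean; textured (cellular) regions score high, smooth
          background scores low.
          """
          img = img.astype(np.float64)
          mean = uniform_filter(img, window)
          mean_sq = uniform_filter(img * img, window)
          std = np.sqrt(np.clip(mean_sq - mean * mean, 0, None))
          contrast = std / (mean + 1e-9)
          return contrast > threshold  # boolean foreground mask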

  7. Automated analysis of food-borne pathogens using a novel microbial cell culture, sensing and classification system.

    PubMed

    Xiang, Kun; Li, Yinglei; Ford, William; Land, Walker; Schaffer, J David; Congdon, Robert; Zhang, Jing; Sadik, Omowunmi

    2016-02-21

    We hereby report the design and implementation of an Autonomous Microbial Cell Culture and Classification (AMC(3)) system for rapid detection of food pathogens. Traditional food testing methods require multistep procedures and long incubation periods, and are thus prone to human error. AMC(3) introduces a "one click" approach to the detection and classification of pathogenic bacteria: once the cultured materials are prepared, all operations are automatic. AMC(3) is an integrated sensor array platform in a microbial fuel cell system composed of a multi-potentiostat, an automated data collection system (a Python program with a Yocto Maxi-coupler electromechanical relay module) and a powerful classification program. The classification scheme consists of a Probabilistic Neural Network (PNN), Support Vector Machines (SVM) and a General Regression Neural Network (GRNN) in an oracle-based system. Differential Pulse Voltammetry (DPV) is performed on standard or unknown samples. Then, using preset feature extraction and quality control, accepted data are analyzed by the intelligent classification system. In a typical use, thirty-two extracted features were analyzed to correctly classify the following pathogens: Escherichia coli ATCC#25922, Escherichia coli ATCC#11775, and Staphylococcus epidermidis ATCC#12228. An accuracy of 85.4% was recorded for unknown samples, within a shorter time period than the industry standard of 24 hours.
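
    As a rough illustration of an ensemble classifier of the kind described (the exact PNN/SVM/GRNN oracle is not specified in the abstract), a scikit-learn sketch that soft-votes between an SVM and two other off-the-shelf classifiers on synthetic data standing in for the 32 DPV-derived features:

      import numpy as np
      from sklearn.ensemble import VotingClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(150, 32))      # stand-in: 32 extracted DPV features
      y = rng.integers(0, 3, size=150)    # stand-in: three pathogen classes

      clf = VotingClassifier([
          ("svm", SVC(probability=True)),
          ("knn", KNeighborsClassifier()),
          ("mlp", MLPClassifier(max_iter=500)),
      ], voting="soft")
      clf.fit(X, y)
      print(f"training accuracy: {clf.score(X, y):.2f}")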

  8. High-Throughput, Automated Protein A Purification Platform with Multiattribute LC-MS Analysis for Advanced Cell Culture Process Monitoring.

    PubMed

    Dong, Jia; Migliore, Nicole; Mehrman, Steven J; Cunningham, John; Lewis, Michael J; Hu, Ping

    2016-09-06

    The levels of many product-related variants observed during the production of monoclonal antibodies depend on control of the manufacturing process, especially the cell culture process. However, it is difficult to characterize samples pulled from the bioreactor because of the low levels of product during the early stages of the process and the high levels of interfering reagents. Furthermore, analytical results are often not available for several days, which slows the process development cycle and prevents "real time" adjustments to the manufacturing process. To reduce the delay and enhance our ability to achieve quality targets, we have developed a low-volume, high-throughput, high-content analytical platform for at-line product quality analysis. This workflow includes an automated, 96-well-plate protein A purification step to isolate antibody product from the cell culture fermentation broth, followed by rapid, multiattribute LC-MS analysis. We have demonstrated quantitative correlations between particular process parameters and the levels of glycosylated and glycated species in a series of small-scale experiments, but the platform could be used to monitor other attributes and be applied across the biopharmaceutical industry.

  9. Evaluation of the Paratrend Multi-Analyte Sensor for Potential Utilization in Long-Duration Automated Cell Culture Monitoring

    NASA Technical Reports Server (NTRS)

    Hwang, Emma Y.; Pappas, Dimitri; Jeevarajan, Antony S.; Anderson, Melody M.

    2004-01-01

    BACKGROUND: Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments. While several single-analyte sensors exist to measure culture health, a multi-analyte sensor would simplify the cell culture system. One such multi-analyte sensor, the Paratrend 7 manufactured by Diametrics Medical, consists of three optical fibers for measuring pH, dissolved carbon dioxide (pCO(2)), dissolved oxygen (pO(2)), and a thermocouple to measure temperature. The sensor bundle was designed for intra-vascular measurements in clinical settings, and can be used in bioreactors operated both on the ground and in NASA's Space Shuttle and International Space Station (ISS) experiments. METHODS: A Paratrend 7 sensor was placed at the outlet of a bioreactor inoculated with BHK-21 (baby hamster kidney) cells. The pH, pCO(2), pO(2), and temperature data were transferred continuously to an external computer. Cell culture medium, manually extracted from the bioreactor through a sampling port, was also assayed using a bench top blood gas analyzer (BGA). RESULTS: Two Paratrend 7 sensors were used over a single cell culture experiment (64 days). When compared to the manually obtained BGA samples, the sensor had good agreement for pH, pCO(2), and pO(2) with bias (and precision) 0.005 (0.024), 8.0 mmHg (4.4 mmHg), and 11 mmHg (17 mmHg), respectively for the first two sensors. A third Paratrend sensor (operated for 141 days) had similar agreement (0.02+/-0.15 for pH, -4+/-8 mm Hg for pCO(2), and 24+/-18 mmHg for pO(2)). CONCLUSION: The resulting biases and precisions are comparable to Paratrend sensor clinical results. Although the pO(2) differences may be acceptable for clinically relevant measurement ranges, the O(2) sensor in this bundle may not be reliable enough for the ranges of pO(2) in these cell culture studies without periodic calibration.

  10. Evaluation of the paratrend multi-analyte sensor for potential utilization in long-duration automated cell culture monitoring.

    PubMed

    Hwang, Emma Y; Pappas, Dimitri; Jeevarajan, Antony S; Anderson, Melody M

    2004-09-01

    Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments. While several single-analyte sensors exist to measure culture health, a multi-analyte sensor would simplify the cell culture system. One such multi-analyte sensor, the Paratrend 7 manufactured by Diametrics Medical, consists of three optical fibers for measuring pH, dissolved carbon dioxide (pCO(2)), dissolved oxygen (pO(2)), and a thermocouple to measure temperature. The sensor bundle was designed for intra-vascular measurements in clinical settings, and can be used in bioreactors operated both on the ground and in NASA's Space Shuttle and International Space Station (ISS) experiments. A Paratrend 7 sensor was placed at the outlet of a bioreactor inoculated with BHK-21 (baby hamster kidney) cells. The pH, pCO(2), pO(2), and temperature data were transferred continuously to an external computer. Cell culture medium, manually extracted from the bioreactor through a sampling port, was also assayed using a bench top blood gas analyzer (BGA). Two Paratrend 7 sensors were used over a single cell culture experiment (64 days). When compared to the manually obtained BGA samples, the sensor had good agreement for pH, pCO(2), and pO(2) with bias (and precision) 0.005 (0.024), 8.0 mmHg (4.4 mmHg), and 11 mmHg (17 mmHg), respectively for the first two sensors. A third Paratrend sensor (operated for 141 days) had similar agreement (0.02+/-0.15 for pH, -4+/-8 mm Hg for pCO(2), and 24+/-18 mmHg for pO(2)). The resulting biases and precisions are comparable to Paratrend sensor clinical results. Although the pO(2) differences may be acceptable for clinically relevant measurement ranges, the O(2) sensor in this bundle may not be reliable enough for the ranges of pO(2) in these cell culture studies without periodic calibration.
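
    Bias and precision figures like those above are conventionally the mean and standard deviation of the paired sensor-minus-reference differences (a Bland-Altman analysis). A small numpy sketch with made-up pH readings, since the raw paired data are not given in the abstract:

      import numpy as np

      # Hypothetical paired measurements: Paratrend sensor vs. blood gas analyzer.
      sensor_ph = np.array([7.31, 7.28, 7.35, 7.30, 7.26])
      bga_ph = np.array([7.30, 7.27, 7.36, 7.29, 7.25])

      diff = sensor_ph - bga_ph
      bias = diff.mean()              # systematic offset between the methods
      precision = diff.std(ddof=1)    # spread of the differences
      print(f"bias = {bias:+.3f}, precision = {precision:.3f}")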

  12. Trypanosoma cruzi infectivity assessment in "in vitro" culture systems by automated cell counting.

    PubMed

    Liempi, Ana; Castillo, Christian; Cerda, Mauricio; Droguett, Daniel; Duaso, Juan; Barahona, Katherine; Hernández, Ariane; Díaz-Luján, Cintia; Fretes, Ricardo; Härtel, Steffen; Kemmerling, Ulrike

    2015-03-01

    Chagas disease is an endemic, neglected tropical disease in Latin America caused by the protozoan parasite Trypanosoma cruzi. In vitro models constitute the first experimental approach for studying the physiopathology of the disease and for assaying potential new trypanocidal agents. Here, we describe the use of commercial software (MATLAB(®)) to quantify T. cruzi amastigotes and infected mammalian cells (BeWo) and compare this automated analysis with manual counting. There was no statistically significant difference between the manual and the automatic quantification of the parasite; the two methods showed a correlation with an r(2) value of 0.9159. The most significant advantage of the automatic quantification was the efficiency of the analysis. The drawback of this automated cell counting method was that some parasites were assigned to the wrong BeWo cell; however, this error did not exceed 5% when adequate experimental conditions were chosen. We conclude that this quantification method constitutes an excellent tool for evaluating the parasite load in cells and therefore an easy and reliable way to study parasite infectivity. Copyright © 2014 Elsevier B.V. All rights reserved.
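
    The agreement statistic quoted (r(2) = 0.9159) is the squared Pearson correlation between manual and automated counts. A brief Python check on hypothetical paired counts, for illustration only:

      import numpy as np
      from scipy.stats import pearsonr

      # Hypothetical paired amastigote counts per field (manual vs. automated).
      manual = np.array([12, 30, 7, 55, 21, 40, 16])
      automated = np.array([13, 28, 8, 52, 23, 41, 14])

      r, _ = pearsonr(manual, automated)
      print(f"r^2 = {r**2:.4f}")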

  13. Characterization and classification of adherent cells in monolayer culture using automated tracking and evolutionary algorithms.

    PubMed

    Zhang, Zhen; Bedder, Matthew; Smith, Stephen L; Walker, Dawn; Shabir, Saqib; Southgate, Jennifer

    2016-08-01

    This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A cell tracking system employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24 h period. Subsequent analysis following feature extraction demonstrated the ability of the technique to successfully separate the modulated classes of cells using evolutionary algorithms. Specifically, a Cartesian Genetic Program (CGP) network was evolved that identified average migration speed, in-contact angular velocity, cohesivity and average cell clump size as the principal features contributing to the separation. Our approach not only provides non-biased and parsimonious insight into modulated class behaviours, but can also be extracted as mathematical formulae for the parameterization of computational models. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Openly accessible microfluidic liquid handlers for automated high-throughput nanoliter cell culture.

    PubMed

    Zhou, Ying; Pang, Yuhong; Huang, Yanyi

    2012-03-06

    Cell culture is typically performed in Petri dishes, with a few million cells growing together, or in microwell plates with thousands of cells in each compartment. When the throughput of an experiment is increased, especially in screening-based assays, even a microliter of solution per well costs a considerable amount of cells and reagents. We took a rational approach to reducing the volume of each cell culture chamber. We designed and fabricated a poly(dimethylsiloxane)-based liquid pipet chip to deliver and transfer nanoliter (50-500 nL) samples and reagents with high accuracy and robustness. A few tens to a few hundreds of cells can be successfully seeded, transferred, passaged, transfected, and stimulated by drugs on a microwell chip using this pipet chip automatically. We have used this system to monitor cell growth dynamically, observe the correlation between culture conditions and cell viability, and quantitatively evaluate cell apoptosis induced by cis-diammineplatinum(II) dichloride (cisplatin). This system shows great potential to facilitate large-scale screening and high-throughput cell-array-based bioassays with the volume of each individual cell colony at the nanoliter level.

  15. Comparative evaluation of the role of single and multiple blood specimens in the outcome of blood cultures using BacT/ALERT 3D (automated) blood culture system in a tertiary care hospital

    PubMed Central

    Elantamilan, D.; Lyngdoh, Valarie Wihiwot; Khyriem, Annie B.; Rajbongshi, Jyotismita; Bora, Ishani; Devi, Surbala Thingujam; Bhattacharyya, Prithwis; Barman, Himesh

    2016-01-01

    Introduction: Bloodstream infection (BSI) is a leading cause of mortality in critically ill patients. The mortality directly attributable to BSI has been estimated at around 16% and 40% in the general hospital population and the Intensive Care Unit (ICU) population, respectively. The detection rate of these infections increases with the number of blood samples obtained for culture. The newer continuously monitoring automated blood culture systems with enhanced culture media show increased yield and sensitivity. Hence, we aimed to study the role of single and multiple blood specimens, obtained from different sites at the same time, in the outcome of an automated blood culture system. Materials and Methods and Results: A total of 1054 blood culture sets were analyzed over 1 year; the sensitivity of one, two, and three samples in a set was found to be 85.67%, 96.59%, and 100%, respectively, a statistically significant difference (P < 0.0001). Similar findings have been reported in a few other studies; however, in contrast to other studies, the isolation rate of Gram-positive bacteria with one (or the first) sample in a blood culture set was lower than that of Gram-negative bacilli. In our study, despite using the BacT/ALERT 3D continuous-monitoring culture system with FAN plus culture bottles, 15% of positive cultures would have been missed if only a single sample had been collected in a blood culture set. Conclusion: Variables like the volume of blood and the number of samples collected from different sites still play a major role in the outcome of these automated blood culture systems. PMID:27688629

  16. [Automated methods of culture determination of M. tuberculosis in liquid media].

    PubMed

    Irtuganova, O A; Smirnova, N S; Slogotskaia, L V; Moroz, A M; Litvinov, V I

    2001-01-01

    One hundred and seventy respiratory samples from patients with different forms of tuberculosis were used to test the efficiency of the automatic liquid culture systems BACTEC MGIT 960 and MB/BacT, in parallel with inoculation onto standard dense media. Altogether, 47 M. tuberculosis isolates were recovered: 41 (87.2%) on the BACTEC MGIT 960, 38 (80.9%) on the MB/BacT, and 36 (76.6%) on the dense media. The average time to detection of mycobacterial growth by the automatic systems was much shorter: 10.7 days on the BACTEC MGIT 960 and 18.7 days on the MB/BacT, versus 33.2 days on the standard dense medium. In terms of sensitivity and detection rate, the automatic systems were superior to the dense media widely used in laboratory practice.

  17. Establishment of a fully automated microtiter plate-based system for suspension cell culture and its application for enhanced process optimization.

    PubMed

    Markert, Sven; Joeris, Klaus

    2017-01-01

    We developed an automated microtiter plate (MTP)-based system for suspension cell culture to meet the increased demand for miniaturized high-throughput applications in biopharmaceutical process development. The generic system is based on off-the-shelf commercial laboratory automation equipment and is able to utilize MTPs of different configurations (6-24 wells per plate) in orbital shaken mode. The shaking conditions were optimized by Computational Fluid Dynamics simulations. The fully automated system handles plate transport, seeding and feeding of cells, daily sampling, and preparation of analytical assays. The integration of all required analytical instrumentation into the system enables hands-off operation, which prevents bottlenecks in sample processing. The modular set-up makes the system flexible and adaptable for a continuous extension of analytical parameters and add-on components. The system proved suitable as a screening tool for process development, as the MTP-based system and bioreactors gave comparable profiles of viable cell density, lactate, and product concentration for CHO cell lines. These studies confirmed that 6-well MTPs as well as 24-deepwell MTPs were predictive of a scale-up to a 1000 L stirred tank reactor (scale factor 1:200,000). Applying the established cell culture system to automated media blend screening in late-stage development, a 22% increase in product yield was achieved in comparison to the reference process. The predicted product increase was subsequently confirmed in 2 L bioreactors. Thus, we demonstrated the feasibility of the automated MTP-based cell culture system for enhanced screening and optimization applications in process development and identified further application areas, such as process robustness. The system offers great potential to accelerate time-to-market for new biopharmaceuticals. Biotechnol. Bioeng. 2017;114: 113-121. © 2016 Wiley Periodicals, Inc.

  18. Analysis of the disagreement between automated bioluminescence-based and culture methods for detecting significant bacteriuria, with proposals for standardizing evaluations of bacteriuria detection methods.

    PubMed Central

    Nichols, W W; Curtis, G D; Johnston, H H

    1982-01-01

    A fully automated method for detecting significant bacteriuria is described, which uses firefly luciferin and luciferase to detect bacterial ATP in urine. The automated method was calibrated and evaluated, using 308 urine specimens, against two reference culture methods. We obtained a specificity of 0.79 and a sensitivity of 0.75 using a quantitative pour plate reference test, and a specificity of 0.79 and a sensitivity of 0.90 using a semiquantitative standard loop reference test. The majority of specimens negative by the automated test but positive by the pour plate reference test were specimens that grew several bacterial species. We suggest that such disagreement was most likely for urine containing around 10(5) colony-forming units per ml (the culture threshold of positivity) and that these specimens were ones contaminated by urethral or vaginal flora. We propose standard procedures for calibrating and evaluating rapid or automated methods for the detection of significant bacteriuria and have analyzed our results using these procedures. We recommend that identical analyses be reported for other evaluations of bacteriuria detection methods. PMID:6808012
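
    Sensitivity and specificity of the kind reported reduce to simple ratios over the four confusion-matrix counts. A tiny Python helper; the counts below are a hypothetical split of the 308 specimens chosen to reproduce the reported pour-plate rates, not the study's actual tabulation.

      def sensitivity_specificity(tp, fp, tn, fn):
          """Return (sensitivity, specificity) from confusion-matrix counts."""
          return tp / (tp + fn), tn / (tn + fp)

      # Hypothetical counts summing to the study's 308 specimens.
      sens, spec = sensitivity_specificity(tp=45, fp=52, tn=196, fn=15)
      print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")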

  19. Evaluation of a new generation of culture bottle using an automated bacterial culture system for detecting nine common contaminating organisms found in platelet components.

    PubMed

    Brecher, M E; Heath, D G; Hay, S N; Rothenberg, S J; Stutzman, L C

    2002-06-01

    An automated bacterial culture system (BacT/ALERT 3D, bioMérieux) has previously been validated with a variety of bacteria in platelets. Here, the recovery of bacteria in platelets was studied using a new generation of culture bottles that do not require venting and that use a liquid emulsion sensor. Bacillus cereus, Enterobacter cloacae, Escherichia coli, Klebsiella oxytoca, Staphylococcus aureus, Staphylococcus epidermidis, Serratia marcescens, Streptococcus viridans, and Propionibacterium acnes isolates were inoculated into Day 2 platelets to concentrations of 10 and 100 CFU per mL. Samples were then studied with the current and new aerobic, anaerobic, and pediatric bottles. All organisms except P. acnes were detected in a mean time of 9.2 to 20.4 (10 CFU/mL) or 8.7 to 18.6 (100 CFU/mL) hours. P. acnes was detected in a mean time of 69.2 (10 CFU/mL) or 66.0 (100 CFU/mL) hours. The 10-fold increase in inoculum was associated with a mean 9.2 percent difference in detection time. The aerobic, anaerobic, and pediatric bottles had mean differences in detection time (hours) between the current and new bottles of 0.10 (p=0.61), 0.4 (p=0.38), and 1.0 (p < 0.001), respectively. No difference in detection time between the current and new aerobic and anaerobic bottles was demonstrated. The new pediatric bottles had a small but significant delay in detection.

  20. Automated Voxel Model from Point Clouds for Structural Analysis of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Bitelli, G.; Castellazzi, G.; D'Altri, A. M.; De Miranda, S.; Lambertini, A.; Selvaggi, I.

    2016-06-01

    In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential for measuring its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is a point cloud. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, into a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. To achieve this result, a voxel model with variable resolution is produced. Different parameters are compared, and the different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
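
    A minimal numpy sketch of the basic point-cloud-to-voxel step (single, fixed resolution; the paper's variable-resolution and model-filling logic is not reproduced here):

      import numpy as np

      def voxelize(points, voxel_size):
          """Map an (N, 3) point cloud to a boolean occupancy grid."""
          origin = points.min(axis=0)
          idx = np.floor((points - origin) / voxel_size).astype(int)
          grid = np.zeros(tuple(idx.max(axis=0) + 1), dtype=bool)
          grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
          return grid, origin

      # Example: 100 000 random points voxelized at 0.1 m resolution.
      pts = np.random.default_rng(2).uniform(0, 10, size=(100_000, 3))
      grid, origin = voxelize(pts, voxel_size=0.1)
      print(grid.shape, int(grid.sum()), "occupied voxels")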

  1. Start/Pat; A parallel-programming toolkit

    SciTech Connect

    Appelbe, B.; Smith, K. ); McDowell, C. )

    1989-07-01

    How can you make Fortran code parallel without isolating the programmer from learning to understand and exploit parallelism effectively? With an interactive toolkit that automates parallelization as it educates. This paper discusses the Start/Pat toolkit.

  2. Serratia marcescens strains implicated in adverse transfusion reactions form biofilms in platelet concentrates and demonstrate reduced detection by automated culture.

    PubMed

    Greco-Stewart, V S; Brown, E E; Parr, C; Kalab, M; Jacobs, M R; Yomtovian, R A; Ramírez-Arcos, S M

    2012-04-01

    Serratia marcescens is a gram-negative bacterium that has been implicated in adverse transfusion reactions associated with contaminated platelet concentrates. The aim of this study was to investigate whether the ability of S. marcescens to form surface-attached aggregates (biofilms) could account for contaminated platelet units being missed during screening by the BacT/ALERT automated culture system. Seven S. marcescens strains, including biofilm-positive and biofilm-negative control strains and five isolates recovered from contaminated platelet concentrates, were grown in enriched Luria-Bertani medium and in platelets. Biofilm formation was examined by staining assay, dislodging experiments and scanning electron microscopy. Clinical strains were also analysed for their ability to evade detection by the BacT/ALERT system. All strains exhibited similar growth in medium and platelets. While only the biofilm-positive control strain formed biofilms in medium, this strain and three clinical isolates associated with transfusion reactions formed biofilms in platelet concentrates. The other two clinical strains, which had been captured during platelet screening by BacT/ALERT, failed to form biofilms in platelets. Biofilm-forming clinical isolates were approximately three times (P < 0.05) more likely to be missed by BacT/ALERT screening than biofilm-negative strains. S. marcescens strains associated with transfusion reactions form biofilms under platelet storage conditions, and initial biofilm formation correlates with missed detection of contaminated platelet concentrates by the BacT/ALERT system. © 2011 The Author(s). Vox Sanguinis © 2011 International Society of Blood Transfusion.

  3. The Use of Two Culturing Methods in Parallel Reveals a High Prevalence and Diversity of Arcobacter spp. in a Wastewater Treatment Plant

    PubMed Central

    2016-01-01

    The genus Arcobacter includes species considered emerging food- and waterborne pathogens. Although Arcobacter has been linked to the presence of faecal pollution, few studies have investigated its prevalence in wastewater, and the only species isolated have been Arcobacter butzleri and Arcobacter cryaerophilus. This study aimed to establish the prevalence of Arcobacter spp. at a WWTP using two culturing methods in parallel (direct plating and culturing after enrichment) and direct detection by m-PCR. In addition, the genetic diversity of the isolates was established using the ERIC-PCR genotyping method. Most of the wastewater samples (96.7%) were positive for Arcobacter, and high genetic diversity was observed among the 651 investigated isolates, which belonged to 424 different ERIC genotypes. However, only a few strains persisted across different dates or sampling points. The use of direct plating in parallel with culturing after enrichment allowed the recovery of the species A. butzleri, A. cryaerophilus, Arcobacter thereius, Arcobacter defluvii, Arcobacter skirrowii, Arcobacter ellisii, Arcobacter cloacae, and Arcobacter nitrofigilis, most of them isolated for the first time from wastewater. The predominant species overall was A. butzleri; by direct plating, however, A. cryaerophilus predominated. The overall predominance of A. butzleri was therefore a bias associated with the use of enrichment. PMID:27981053

  4. The Use of Two Culturing Methods in Parallel Reveals a High Prevalence and Diversity of Arcobacter spp. in a Wastewater Treatment Plant.

    PubMed

    Levican, Arturo; Collado, Luis; Figueras, Maria José

    2016-01-01

    The genus Arcobacter includes species considered emerging food- and waterborne pathogens. Although Arcobacter has been linked to the presence of faecal pollution, few studies have investigated its prevalence in wastewater, and the only species isolated have been Arcobacter butzleri and Arcobacter cryaerophilus. This study aimed to establish the prevalence of Arcobacter spp. at a WWTP using two culturing methods in parallel (direct plating and culturing after enrichment) and direct detection by m-PCR. In addition, the genetic diversity of the isolates was established using the ERIC-PCR genotyping method. Most of the wastewater samples (96.7%) were positive for Arcobacter, and high genetic diversity was observed among the 651 investigated isolates, which belonged to 424 different ERIC genotypes. However, only a few strains persisted across different dates or sampling points. The use of direct plating in parallel with culturing after enrichment allowed the recovery of the species A. butzleri, A. cryaerophilus, Arcobacter thereius, Arcobacter defluvii, Arcobacter skirrowii, Arcobacter ellisii, Arcobacter cloacae, and Arcobacter nitrofigilis, most of them isolated for the first time from wastewater. The predominant species overall was A. butzleri; by direct plating, however, A. cryaerophilus predominated. The overall predominance of A. butzleri was therefore a bias associated with the use of enrichment.

  5. Structure_threader: An improved method for automation and parallelization of programs structure, fastStructure and MavericK on multicore CPU systems.

    PubMed

    Pina-Martins, Francisco; Silva, Diogo N; Fino, Joana; Paulo, Octávio S

    2017-08-04

    Structure_threader is a program to parallelize multiple runs of genetic clustering software that does not make use of multithreading technology (structure, fastStructure and MavericK) on multicore computers. Our approach was benchmarked across multiple systems and displayed great speed improvements relative to the single-threaded implementation, scaling very close to linearly with the number of physical cores used. Structure_threader was compared to previous software written for the same task (ParallelStructure and StrAuto) and proved to be the faster wrapper (up to 25% faster) under all tested scenarios. Furthermore, Structure_threader can perform several automatic and convenient operations, assisting the user in assessing the most biologically likely value of 'K' via implementations such as the "Evanno" or "Thermodynamic Integration" tests, and can automatically draw the "meanQ" plots (static or interactive) for each value of K (or even combined plots). Structure_threader is written in Python 3 and licensed under the GPLv3. It can be downloaded free of charge at https://github.com/StuntsPT/Structure_threader. © 2017 John Wiley & Sons Ltd.
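
    The core trick, running many independent instances of a single-threaded command-line program across cores, can be sketched with Python's standard library. The "structure" command line below is illustrative only, not Structure_threader's actual invocation.

      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      def run_one(k, rep):
          """Launch one (hypothetical) single-threaded clustering run."""
          cmd = ["structure", "-K", str(k), "-o", f"results_K{k}_rep{rep}"]
          return subprocess.run(cmd, capture_output=True).returncode

      if __name__ == "__main__":
          jobs = [(k, rep) for k in range(1, 7) for rep in range(10)]
          with ProcessPoolExecutor(max_workers=8) as pool:
              codes = list(pool.map(run_one, *zip(*jobs)))
          print(f"{codes.count(0)}/{len(jobs)} runs finished cleanly")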

  6. Reductions in self-reported stress and anticipatory heart rate with the use of a semi-automated parallel parking system.

    PubMed

    Reimer, Bryan; Mehler, Bruce; Coughlin, Joseph F

    2016-01-01

    Drivers' reactions to a semi-autonomous assisted parallel parking technology were evaluated in a field experiment. A sample of 42 drivers, balanced by gender and across three age groups (20-29, 40-49, 60-69), was given a comprehensive briefing, saw the technology demonstrated, practiced parallel parking three times each with and without the assistive technology, and was then assessed on an additional three parking events each with and without the technology. Anticipatory stress, as measured by heart rate, was significantly lower when drivers approached a parking space knowing that they would be using the assistive technology as opposed to parking manually. Self-reported stress levels following assisted parks were also lower. Thus, both subjective and objective data support the position that the assistive technology reduced stress levels in drivers who were given detailed training. It was observed that drivers decreased their use of turn signals when using the semi-autonomous technology, raising a caution concerning unintended lapses in safe driving behaviors that may occur when assistive technologies are used.

  7. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    PubMed

    Katta, Mohan A V S K; Khan, Aamir W; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K

    2015-01-01

    Rapid popularity and adoption of next generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C using HTSlib (v1.2.1) (http://htslib.org), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs in parallel using Bpipe (https://github.com/ssadedin/bpipe) and, internally, fine-grained task parallelization using OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode; in quick mode, it outperforms similar existing QC pipelines and tools in speed. The NGS-QCbox pipeline, the Raspberry tool and associated scripts are available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.
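
    Read/base-level statistics of the kind Raspberry computes reduce to a single pass over the FASTQ records. A minimal pure-Python sketch (gzip-aware, Phred+33 qualities assumed); the real tool is written in C on HTSlib and is far faster.

      import gzip

      def fastq_stats(path):
          """One-pass read count, base count, and mean base quality (Phred+33)."""
          opener = gzip.open if path.endswith(".gz") else open
          reads = bases = qual_sum = 0
          with opener(path, "rt") as fh:
              for i, line in enumerate(fh):
                  if i % 4 == 1:      # sequence line
                      reads += 1
                      bases += len(line.strip())
                  elif i % 4 == 3:    # quality line
                      qual_sum += sum(ord(c) - 33 for c in line.strip())
          return reads, bases, (qual_sum / bases if bases else 0.0)

      # Usage (hypothetical file name):
      # reads, bases, mean_q = fastq_stats("sample_R1.fastq.gz")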

  8. Toward fully automated high performance computing drug discovery: a massively parallel virtual screening pipeline for docking and molecular mechanics/generalized Born surface area rescoring to improve enrichment.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2014-01-27

    In this work we announce and evaluate a high-throughput virtual screening pipeline for in silico screening of virtual compound databases using high-performance computing (HPC). Notable features of this pipeline include an automated receptor preparation scheme with unsupervised binding-site identification. The pipeline comprises receptor/target preparation, ligand preparation, VinaLC docking calculations, and molecular mechanics/generalized Born surface area (MM/GBSA) rescoring using the GB model by Onufriev and co-workers [J. Chem. Theory Comput. 2007, 3, 156-169]. Furthermore, we leverage HPC resources to perform an unprecedented, comprehensive evaluation of MM/GBSA rescoring applied to the DUD-E data set (Directory of Useful Decoys: Enhanced), for which we selected 38 protein targets and a total of ∼0.7 million actives and decoys. The computer wall time for virtual screening has been reduced drastically on HPC machines, which increases the feasibility of screening extremely large ligand databases with more accurate methods. HPC resources allowed us to rescore 20 poses per compound and evaluate the optimal number of poses to rescore. We find that keeping 5-10 poses is a good compromise between accuracy and computational expense. Overall, the results demonstrate that MM/GBSA rescoring has higher average receiver operating characteristic (ROC) area under the curve (AUC) values and consistently better early recovery of actives than Vina docking alone, although the enrichment performance is target-dependent: MM/GBSA rescoring significantly outperforms Vina docking for the folate enzymes, kinases, and several other enzymes. The more accurate energy function and solvation terms of the MM/GBSA method allow it to achieve better enrichment, but the rescoring is still limited by the docking method's ability to generate poses with the correct binding modes.
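
    The enrichment metric used above, the ROC AUC over actives versus decoys, can be computed directly from per-compound scores with the rank-based (Mann-Whitney) formulation; a minimal sketch, assuming the convention that lower scores mean predicted stronger binding:

        # Sketch of the enrichment metric: ROC AUC is the probability that a
        # randomly chosen active outranks (scores lower than) a random decoy.
        def roc_auc(scores_actives, scores_decoys):
            pairs = 0.0
            for a in scores_actives:
                for d in scores_decoys:
                    if a < d:
                        pairs += 1.0
                    elif a == d:
                        pairs += 0.5      # ties count as half a win
            return pairs / (len(scores_actives) * len(scores_decoys))

        # e.g. roc_auc([-9.1, -8.4], [-7.0, -6.2, -8.6]) -> 0.833...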

  9. Enhancing the usability and performance of structured association mapping algorithms using automation, parallelization, and visualization in the GenAMap software system

    PubMed Central

    2012-01-01

    Background Structured association mapping is proving to be a powerful strategy to find genetic polymorphisms associated with disease. However, these algorithms are often distributed as command line implementations that require expertise and effort to customize and put into practice. Because of the difficulty required to use these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene-networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in JAVA; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660

  10. Performance Equivalence and Validation of the Soleris Automated System for Quantitative Microbial Content Testing Using Pure Suspension Cultures.

    PubMed

    Limberg, Brian J; Johnstone, Kevin; Filloon, Thomas; Catrenich, Carl

    2016-09-01

    Using United States Pharmacopeia-National Formulary (USP-NF) general method <1223> guidance, the Soleris(®) automated system and reagents (Nonfermenting Total Viable Count for bacteria and Direct Yeast and Mold for yeast and mold) were validated, using a performance equivalence approach, as an alternative to plate counting for total microbial content analysis using five representative microbes: Staphylococcus aureus, Bacillus subtilis, Pseudomonas aeruginosa, Candida albicans, and Aspergillus brasiliensis. Detection times (DTs) in the alternative automated system were linearly correlated to CFU/sample (R(2) = 0.94-0.97) with ≥70% accuracy per USP General Chapter <1223> guidance. The LOD and LOQ of the automated system were statistically similar to the traditional plate count method. This system was significantly more precise than plate counting (RSD 1.2-2.9% for DT, 7.8-40.6% for plate counts), was statistically comparable to plate counting with respect to variations in analyst, vial lots, and instruments, and was robust when variations in the operating detection thresholds (dTs; ±2 units) were used. The automated system produced accurate results, was more precise and less labor-intensive, and met or exceeded criteria for a valid alternative quantitative method, consistent with USP-NF general method <1223> guidance.
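
    The linear relationship between detection time (DT) and counts reported above implies a simple calibration step: regress DT against log10(CFU), then invert the fit to turn a measured DT into an estimated count. A minimal sketch with invented placeholder numbers (not data from the study):

        # Hedged sketch of a DT-vs-log(CFU) calibration; values are
        # illustrative placeholders only.
        import numpy as np

        log_cfu = np.array([1, 2, 3, 4, 5], dtype=float)    # log10 CFU/sample
        dt_hours = np.array([22.1, 18.0, 14.2, 10.3, 6.5])  # detection times

        slope, intercept = np.polyfit(log_cfu, dt_hours, 1)
        r2 = np.corrcoef(log_cfu, dt_hours)[0, 1] ** 2

        def estimate_log_cfu(dt):
            # Invert the linear fit: shorter DT -> higher load.
            return (dt - intercept) / slope

        print(f"R^2 = {r2:.3f}; DT of 12 h -> ~10^{estimate_log_cfu(12.0):.1f} CFU")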

  11. Use of an automated PCR assay, the GenomEra S. pneumoniae, for rapid detection of Streptococcus pneumoniae in blood cultures.

    PubMed

    Hirvonen, Jari J; Seiskari, Tapio; Harju, Inka; Rantakokko-Jalava, Kaisu; Vuento, Risto; Aittoniemi, Janne

    2015-01-01

    Streptococcus pneumoniae is recognized as a major cause of pneumonia, meningitis, and bacteremia. Since the mortality rate for pneumococcal bacteremia remains high, reliable detection of the bacterium in blood samples is important. In this study, the performance of a new automated PCR assay, the GenomEra(™) S. pneumoniae, for direct detection of S. pneumoniae in blood cultures was investigated. In total, 200 samples were analyzed, including 90 previously identified culture collection isolates and 110 blood culture specimens. The species identification was confirmed with routine diagnostic methods including MALDI-TOF or 16S rDNA sequencing. From the culture collection, the GenomEra S. pneumoniae assay correctly identified all 37 S. pneumoniae isolates, comprising 18 different serotypes, while all 53 non-S. pneumoniae isolates yielded negative test results. Of the 110 blood culture specimens, 46 grew S. pneumoniae, and all were positive by the GenomEra assay directly from the bottle. The detection sensitivity and specificity of the GenomEra assay for direct analysis of S. pneumoniae in signal-positive blood culture bottles were both 100%. With a straightforward sample preparation protocol for blood cultures, the results were available within 55 min, significantly quicker than the routinely used identification methods (18-48 h). The two-step, time-resolved fluorometric measurement mode employed by the GenomEra CDX(™) instrument showed no interference from blood or charcoal. The GenomEra S. pneumoniae assay performs well for the rapid and reliable detection of S. pneumoniae in blood cultures.

  12. Parallel computers

    SciTech Connect

    Treveaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  13. Rapid and accurate direct antibiotic susceptibility testing of blood culture broths using MALDI Sepsityper combined with the BD Phoenix automated system.

    PubMed

    Hazelton, Briony; Thomas, Lee C; Olma, Thomas; Kok, Jen; O'Sullivan, Matthew; Chen, Sharon C-A; Iredell, Jonathan R

    2014-12-01

    Antibiotic susceptibility testing with the BD Phoenix system on bacterial cell pellets generated from blood culture broths using the Bruker MALDI Sepsityper kit was evaluated. Seventy-six Gram-negative isolates, including 12 with defined multi-resistant phenotypes, had antibiotic susceptibility testing (AST) performed by Phoenix on the cell pellet in parallel with conventional methods. In total, 1414/1444 (97.9 %) of susceptibility tests were concordant, with only 1 (0.07 %) very major error. This novel method has the potential to reduce the turnaround time for AST results by up to a day for Gram-negative bacteraemias.

  14. Systematic review automation technologies.

    PubMed

    Tsafnat, Guy; Glasziou, Paul; Choong, Miew Keen; Dunn, Adam; Galgani, Filippo; Coiera, Enrico

    2014-07-09

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.

  15. Systematic review automation technologies

    PubMed Central

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  16. Capillary electrophoresis for automated on-line monitoring of suspension cultures: Correlating cell density, nutrients and metabolites in near real-time.

    PubMed

    Alhusban, Ala A; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M

    2016-05-12

    Increasingly stringent demands on the production of biopharmaceuticals require the monitoring of process parameters that impact product quality. We developed an automated platform for on-line, near real-time monitoring of suspension cultures by integrating microfluidic components for cell counting and filtration with a high-resolution separation technique. This enabled the correlation of the growth of a human lymphocyte cell line with changes in the essential metabolic markers glucose, glutamine, leucine/isoleucine and lactate, determined by Sequential Injection-Capillary Electrophoresis (SI-CE). Using 8.1 mL of media (41 μL per run), the metabolic status and cell density were recorded every 30 min over 4 days. The presented platform is flexible, simple and automated, and allows for fast, robust and sensitive analysis with low sample consumption and high sample throughput. It is compatible with up- and out-scaling, and as such provides a promising new solution to meet future demands in process monitoring in the biopharmaceutical industry. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Evaluation of an Automated Rapid Diagnostic Assay for Detection of Gram-Negative Bacteria and Their Drug-Resistance Genes in Positive Blood Cultures

    PubMed Central

    Tojo, Masayoshi; Fujita, Takahiro; Ainoda, Yusuke; Nagamatsu, Maki; Hayakawa, Kayoko; Mezaki, Kazuhisa; Sakurai, Aki; Masui, Yoshinori; Yazaki, Hirohisa; Takahashi, Hiroshi; Miyoshi-Akiyama, Tohru; Totsuka, Kyoichi; Kirikae, Teruo; Ohmagari, Norio

    2014-01-01

    We evaluated the performance of the Verigene Gram-Negative Blood Culture Nucleic Acid Test (BC-GN; Nanosphere, Northbrook, IL, USA), an automated multiplex assay for rapid identification of positive blood cultures caused by 9 Gram-negative bacteria (GNB) and for detection of 9 genes associated with β-lactam resistance. The BC-GN assay can be performed directly from positive blood cultures with 5 minutes of hands-on time and 2 hours of run time per sample. A total of 397 GNB-positive blood cultures were analyzed using the BC-GN assay. Of the 397 samples, 295 were simulated samples prepared by inoculating GNB into blood culture bottles, and the remainder were clinical samples from 102 patients with positive blood cultures. Aliquots of the positive blood cultures were tested by the BC-GN assay. The results of bacterial identification with the BC-GN assay versus standard laboratory methods were as follows: Acinetobacter spp. (39 isolates for the BC-GN assay/39 for the standard methods), Citrobacter spp. (7/7), Escherichia coli (87/87), Klebsiella oxytoca (13/13), Proteus spp. (11/11), Enterobacter spp. (29/30), Klebsiella pneumoniae (62/72), Pseudomonas aeruginosa (124/125), and Serratia marcescens (18/21). From the 102 clinical samples, 104 bacterial species were identified with the BC-GN assay, whereas 110 were identified with the standard methods. The BC-GN assay also detected all β-lactam resistance genes tested (233 genes), including 54 blaCTX-M, 119 blaIMP, 8 blaKPC, 16 blaNDM, 24 blaOXA-23, 1 blaOXA-24/40, 1 blaOXA-48, 4 blaOXA-58, and 6 blaVIM. The data show that the BC-GN assay provides rapid detection of GNB and β-lactam resistance genes in positive blood cultures and has the potential to contribute to optimal patient management through earlier detection of major antimicrobial resistance genes. PMID:24705449

  18. Evaluation of an automated rapid diagnostic assay for detection of Gram-negative bacteria and their drug-resistance genes in positive blood cultures.

    PubMed

    Tojo, Masayoshi; Fujita, Takahiro; Ainoda, Yusuke; Nagamatsu, Maki; Hayakawa, Kayoko; Mezaki, Kazuhisa; Sakurai, Aki; Masui, Yoshinori; Yazaki, Hirohisa; Takahashi, Hiroshi; Miyoshi-Akiyama, Tohru; Totsuka, Kyoichi; Kirikae, Teruo; Ohmagari, Norio

    2014-01-01

    We evaluated the performance of the Verigene Gram-Negative Blood Culture Nucleic Acid Test (BC-GN; Nanosphere, Northbrook, IL, USA), an automated multiplex assay for rapid identification of positive blood cultures caused by 9 Gram-negative bacteria (GNB) and for detection of 9 genes associated with β-lactam resistance. The BC-GN assay can be performed directly from positive blood cultures with 5 minutes of hands-on time and 2 hours of run time per sample. A total of 397 GNB-positive blood cultures were analyzed using the BC-GN assay. Of the 397 samples, 295 were simulated samples prepared by inoculating GNB into blood culture bottles, and the remainder were clinical samples from 102 patients with positive blood cultures. Aliquots of the positive blood cultures were tested by the BC-GN assay. The results of bacterial identification with the BC-GN assay versus standard laboratory methods were as follows: Acinetobacter spp. (39 isolates for the BC-GN assay/39 for the standard methods), Citrobacter spp. (7/7), Escherichia coli (87/87), Klebsiella oxytoca (13/13), Proteus spp. (11/11), Enterobacter spp. (29/30), Klebsiella pneumoniae (62/72), Pseudomonas aeruginosa (124/125), and Serratia marcescens (18/21). From the 102 clinical samples, 104 bacterial species were identified with the BC-GN assay, whereas 110 were identified with the standard methods. The BC-GN assay also detected all β-lactam resistance genes tested (233 genes), including 54 bla(CTX-M), 119 bla(IMP), 8 bla(KPC), 16 bla(NDM), 24 bla(OXA-23), 1 bla(OXA-24/40), 1 bla(OXA-48), 4 bla(OXA-58), and 6 bla(VIM). The data show that the BC-GN assay provides rapid detection of GNB and β-lactam resistance genes in positive blood cultures and has the potential to contribute to optimal patient management through earlier detection of major antimicrobial resistance genes.

  19. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  20. Impact of yeast-bacteria coinfection on the detection of Candida sp. in an automated blood culture system.

    PubMed

    Cateau, Estelle; Cognee, Anne-Sophie; Tran, Tri Cong; Vallade, Elodie; Garcia, Magali; Belaz, Sorya; Kauffmann-Lacroix, Catherine; Rodier, Marie-Helene

    2012-04-01

    Invasive candidiasis remains a major cause of morbidity and mortality. It is now well known that an early diagnosis contributes to the patients' outcome. Blood cultures, which are the first-line test when a bloodstream infection is suspected, can be carried out using a fungus-selective medium (containing antibiotics) or a standard microorganism medium allowing both bacterial and fungal growth. Some patients suffer from polymicrobial sepsis involving bacteria and yeasts, so we investigated the influence of the presence of bacteria on fungal development in blood cultures. Simulated blood cultures were performed using Candida albicans or C. glabrata coincubated with Escherichia coli or Staphylococcus aureus at different concentrations. The results showed that, in a standard microorganism medium, bacterial growth could mask fungal development. Thus, in patients at risk of invasive candidiasis, the use of a specific fungal medium could improve the diagnosis and allow an earlier, effective antifungal treatment.

  1. Parallel computation

    NASA Astrophysics Data System (ADS)

    Huberman, Bernardo A.

    1989-11-01

    This paper reviews three different aspects of parallel computation which are useful for physics. The first part deals with special architectures for parallel computing (SIMD and MIMD machines) and their differences, with examples of their uses. The second section discusses the speedup that can be achieved in parallel computation and the constraints generated by the issues of communication and synchrony. The third part describes computation by distributed networks of powerful workstations without global controls and the issues involved in understanding their behavior.

  2. Automated Video-Based Analysis of Contractility and Calcium Flux in Human-Induced Pluripotent Stem Cell-Derived Cardiomyocytes Cultured over Different Spatial Scales

    PubMed Central

    Huebsch, Nathaniel; Loskill, Peter; Mandegar, Mohammad A.; Marks, Natalie C.; Sheehan, Alice S.; Ma, Zhen; Mathur, Anurag; Nguyen, Trieu N.; Yoo, Jennie C.; Judge, Luke M.; Spencer, C. Ian; Chukka, Anand C.; Russell, Caitlin R.; So, Po-Lin

    2015-01-01

    Contractile motion is the simplest metric of cardiomyocyte health in vitro, but unbiased quantification is challenging. We describe a rapid automated method, requiring only standard video microscopy, to analyze the contractility of human-induced pluripotent stem cell-derived cardiomyocytes (iPS-CM). New algorithms for generating and filtering motion vectors, combined with a newly developed isogenic iPSC line harboring the genetically encoded calcium indicator GCaMP6f, allow simultaneous, user-independent measurement and analysis of the coupling between calcium flux and contractility. The relative performance of these algorithms, in terms of improving signal-to-noise ratio, was tested. Applying these algorithms allowed analysis of contractility in iPS-CM cultured over multiple spatial scales, from single cells to three-dimensional constructs. This open-source software was validated with analysis of the isoproterenol response in these cells, and can be applied in future studies comparing the drug responsiveness of iPS-CM cultured in different microenvironments in the context of tissue engineering. PMID:25333967
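
    The published approach is built on motion vector estimation; as a deliberately crude stand-in that conveys how a contractility trace can be pulled from plain video, the mean absolute frame-to-frame intensity change already peaks during contraction. A sketch (a simplification, not the paper's algorithm):

        # Crude beat-signal sketch: intensity change between consecutive
        # frames rises during contraction and relaxation.
        import numpy as np

        def motion_trace(frames):
            """frames: array of shape (n_frames, height, width), grayscale."""
            frames = np.asarray(frames, dtype=float)
            return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

        # Peaks in motion_trace(video) could then be paired with a GCaMP6f
        # fluorescence trace to study calcium-contraction coupling.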

  3. Microbial identification and automated antibiotic susceptibility testing directly from positive blood cultures using MALDI-TOF MS and VITEK 2.

    PubMed

    Wattal, C; Oberoi, J K

    2016-01-01

    The study addresses the utility of Matrix-Assisted Laser Desorption/Ionisation Time-Of-Flight mass spectrometry (MALDI-TOF MS) using VITEK MS and the VITEK 2 antimicrobial susceptibility testing (AST) system for direct identification (ID) and timely AST from positive blood culture bottles using a lysis-filtration method (LFM). Between July and December 2014, a total of 140 non-duplicate mono-microbial blood cultures were processed. An aliquot of positive blood culture broth was incubated with lysis buffer before the bacteria were filtered and washed. Micro-organisms recovered from the filter were first identified using VITEK MS, and a suspension of the recovered organisms was used for direct AST by VITEK 2 once the ID was known. Direct ID and AST results were compared with classical methods using solid growth. Of the 140 bottles tested, VITEK MS gave a correct identification to the genus and/or species level in 70.7%. For the 103 bottles where identification was possible, there was agreement in 97 samples (94.17%) with classical culture. Compared to the routine method, direct AST resulted in category agreement in 860 (96.5%) of 891 bacteria-antimicrobial agent combinations tested. The results of direct ID and AST were available, on average, 16.1 hours before those of the standard approach. The combined use of VITEK MS and VITEK 2 directly on samples from positive blood culture bottles using an LFM technique can provide rapid and reliable ID and AST results in bloodstream infections, allowing early institution of targeted treatment. The combination of LFM and AST using VITEK 2 was found to expedite AST reliably.

  4. Automated Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Pahle, Joseph; Brown, Nelson

    2015-01-01

    This presentation is an overview of the Automated Cooperative Trajectories project. An introduction to the phenomena of wake vortices is given, along with a summary of past research into the possibility of extracting energy from the wake by flying close parallel trajectories. Challenges and barriers to adoption of civilian automatic wake surfing technology are identified. A hardware-in-the-loop simulation is described that will support future research. Finally, a roadmap for future research and technology transition is proposed.

  5. Evaluation of the 3D BacT/ALERT automated culture system for the detection of microbial contamination of platelet concentrates.

    PubMed

    McDonald, C P; Rogers, A; Cox, M; Smith, R; Roy, A; Robbins, S; Hartley, S; Barbara, J A J; Rothenberg, S; Stutzman, L; Widders, G

    2002-10-01

    Bacterial transmission remains the major component of morbidity and mortality associated with transfusion-transmitted infections. Platelet concentrates are the most common cause of bacterial transmission. The BacT/ALERT 3D automated blood culture system has the potential to screen platelet concentrates for the presence of bacteria. Evaluation of this system was performed by spiking day-2 apheresis platelet units with individual bacterial isolates at final concentrations of 10 and 100 colony-forming units (cfu)/mL. Fifteen organisms were used which had been cited in platelet transmission and monitoring studies. BacT/ALERT times to detection were compared with thioglycollate broth cultures, and the performance of five types of BacT/ALERT culture bottles was evaluated. Sampling was performed immediately after the inoculation of the units, and 10 replicates were performed per organism concentration for each of the five types of BacT/ALERT bottles. The mean times for the detection of these 15 organisms by BacT/ALERT, with the exception of Propionibacterium acnes, ranged from 9.1 to 48.1 h (all 10 replicates were positive). In comparison, the time range found using thioglycollate was 12.0-32.3 h (all 10 replicates were positive). The mean BacT/ALERT detection times for P. acnes ranged from 89.0 to 177.6 h, compared with 75.6-86.4 h for the thioglycollate broth. BacT/ALERT, with the exception of P. acnes, which has dubious clinical significance, gave equivalent or shorter detection times when compared with the thioglycollate broth system. The BacT/ALERT system detected a range of organisms at levels of 10 and 100 cfu/mL. This study validates the BacT/ALERT microbial detection system for screening platelets. Currently, the system is the only practically viable option available for routinely screening platelet concentrates to prevent bacterial transmission.

  6. A time-to-event pharmacodynamic model describing treatment response in patients with pulmonary tuberculosis using days to positivity in automated liquid mycobacterial culture.

    PubMed

    Chigutsa, Emmanuel; Patel, Kashyap; Denti, Paolo; Visser, Marianne; Maartens, Gary; Kirkpatrick, Carl M J; McIlleron, Helen; Karlsson, Mats O

    2013-02-01

    Days to positivity in automated liquid mycobacterial culture have been shown to correlate with mycobacterial load and have been proposed as a useful biomarker for treatment responses in tuberculosis. However, there is currently no quantitative method or model to analyze the change in days to positivity with time on treatment. The objectives of this study were to describe the decline in numbers of mycobacteria in sputum collected once weekly for 8 weeks from patients on treatment for tuberculosis using days to positivity in liquid culture. One hundred forty-four patients with smear-positive pulmonary tuberculosis were recruited from a tuberculosis clinic in Cape Town, South Africa. A nonlinear mixed-effects repeated-time-to-event modeling approach was used to analyze the time-to-positivity data. A biexponential model described the decline in the estimated number of bacteria in patients' sputum samples, while a logistic model with a lag time described the growth of the bacteria in liquid culture. At baseline, the estimated number of rapidly killed bacteria is typically 41 times higher than that of those that are killed slowly. The time to kill half of the rapidly killed bacteria was about 1.8 days, while it was 39 days for slowly killed bacteria. Patients with lung cavitation had higher bacterial loads than patients without lung cavitation. The model successfully described the increase in days to positivity as treatment progressed, differentiating between bacteria that are killed rapidly and those that are killed slowly. Our model can be used to analyze similar data from studies testing new drug regimens.
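
    Written out, the biexponential decline described above takes the following form, with rate constants recovered from the reported kill half-lives (the notation is ours, not necessarily the authors'):

        % Biexponential decline in viable sputum bacteria under treatment.
        \[
          N(t) = N_{\mathrm{fast}}\,e^{-k_{\mathrm{fast}}t}
               + N_{\mathrm{slow}}\,e^{-k_{\mathrm{slow}}t},
          \qquad
          \frac{N_{\mathrm{fast}}(0)}{N_{\mathrm{slow}}(0)} \approx 41,
        \]
        \[
          k_{\mathrm{fast}} = \frac{\ln 2}{1.8~\text{days}} \approx 0.39~\text{day}^{-1},
          \qquad
          k_{\mathrm{slow}} = \frac{\ln 2}{39~\text{days}} \approx 0.018~\text{day}^{-1}.
        \]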

  7. Spheroid formation of human thyroid cancer cells in an automated culturing system during the Shenzhou-8 Space mission.

    PubMed

    Pietsch, Jessica; Ma, Xiao; Wehland, Markus; Aleshcheva, Ganna; Schwarzwälder, Achim; Segerer, Jürgen; Birlem, Maria; Horn, Astrid; Bauer, Johann; Infanger, Manfred; Grimm, Daniela

    2013-10-01

    Human follicular thyroid cancer cells were cultured in Space to investigate the impact of microgravity on 3D growth. For this purpose, we designed and constructed a cell container that can endure enhanced physical forces, is connected to fluid storage chambers, performs media changes and cell harvesting automatically and supports cell viability. The container consists of a cell suspension chamber, two reserve tanks for medium and fixative and a pump for fluid exchange. The selected materials proved durable, non-cytotoxic, and did not inactivate RNAlater. This container was operated automatically during the unmanned Shenzhou-8 Space mission. FTC-133 human follicular thyroid cancer cells were cultured in Space for 10 days. Culture medium was exchanged after 5 days in Space and the cells were fixed after 10 days. The experiment revealed a scaffold-free formation of extraordinary large three-dimensional aggregates by thyroid cancer cells with altered expression of EGF and CTGF genes under real microgravity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Automated Image Processing for Spatially Resolved Analysis of Lipid Droplets in Cultured 3T3-L1 Adipocytes

    PubMed Central

    Sims, James Kenneth; Rohr, Brian; Miller, Eric

    2015-01-01

    Cellular hypertrophy of adipose tissue underlies many of the proposed proinflammatory mechanisms for obesity-related diseases. Adipose hypertrophy results from an accumulation of esterified lipids (triglycerides) into membrane-enclosed intracellular lipid droplets (LDs). The coupling between adipocyte metabolism and LD morphology could be exploited to investigate biochemical regulation of lipid pathways by monitoring the dynamics of LDs. This article describes an image processing method to identify LDs based on several distinctive optical and morphological characteristics of these cellular bodies as they appear under bright-field microscopy. The algorithm was developed against images of 3T3-L1 preadipocyte cultures induced to differentiate into adipocytes. We show that the calculated lipid volumes are in excellent agreement with enzymatic assay data on total intracellular triglyceride content. We also demonstrate that the image processing method can efficiently characterize the highly heterogeneous spatial distribution of LDs in a culture by showing that differentiation occurs in distinct clusters separated by regions of nearly undifferentiated cells. Prospectively, the LD detection method described in this work could be applied to time-lapse data collected with simple visible light microscopy equipment to quantitatively investigate LD dynamics. PMID:25390760
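
    The published pipeline is more elaborate, but the core detect-then-measure step can be sketched with standard image-processing primitives; the thresholding direction, size and circularity cutoffs below are assumptions for illustration (using scikit-image):

        # Hedged sketch of LD detection: threshold, label connected regions,
        # keep round objects, and approximate each droplet as a sphere.
        import numpy as np
        from skimage import filters, measure

        def detect_droplets(image, min_area=20, min_circularity=0.7):
            mask = image > filters.threshold_otsu(image)  # assumes LDs appear bright
            labels = measure.label(mask)
            droplets = []
            for region in measure.regionprops(labels):
                circ = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
                if region.area >= min_area and circ >= min_circularity:
                    droplets.append(region)
            # Sphere approximation gives a per-culture lipid volume estimate.
            total_volume = sum((4 / 3) * np.pi * (r.equivalent_diameter / 2) ** 3
                               for r in droplets)
            return droplets, total_volume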

  9. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  10. Comparison of automated BAX PCR and standard culture methods for detection of Listeria monocytogenes in blue crabmeat (Callinectes sapidus) and blue crab processing plants.

    PubMed

    Pagadala, Sivaranjani; Parveen, Salina; Schwarz, Jurgen G; Rippen, Thomas; Luchansky, John B

    2011-11-01

    This study compared the automated BAX PCR with the standard culture method (SCM) for detecting Listeria monocytogenes in blue crab processing plants. Raw crabs, crabmeat, and environmental sponge samples were collected monthly from seven processing plants during the plant operating season, May through November 2006. For detection of L. monocytogenes in raw crabs and crabmeat, enrichment was performed in Listeria enrichment broth, whereas for environmental samples demi-Fraser broth was used; enrichments were then plated on both Oxford agar and L. monocytogenes plating medium. Enriched samples were also analyzed by BAX PCR. A total of 960 samples were examined; 59 were positive by BAX PCR and 43 by SCM. Overall, there was no significant difference (P ≤ 0.05) between the methods for detecting the presence of L. monocytogenes in samples collected from crab processing plants. Twenty-two and 18 raw crab samples were positive for L. monocytogenes by SCM and BAX PCR, respectively. Twenty and 32 environmental samples were positive for L. monocytogenes by SCM and BAX PCR, respectively, whereas only one and nine finished products were positive. The sensitivities of BAX PCR for detecting L. monocytogenes in raw crabs, crabmeat, and environmental samples were 59.1, 100, and 60%, respectively. The results of this study indicate that BAX PCR is as sensitive as SCM for detecting L. monocytogenes in crabmeat, but more sensitive than SCM for detecting this bacterium in raw crabs and environmental samples.

  11. Wire-Guide Manipulator For Automated Welding

    NASA Technical Reports Server (NTRS)

    Morris, Tim; White, Kevin; Gordon, Steve; Emerich, Dave; Richardson, Dave; Faulkner, Mike; Stafford, Dave; Mccutcheon, Kim; Neal, Ken; Milly, Pete

    1994-01-01

    Compact motor drive positions guide for welding filler wire. Drive part of automated wire feeder in partly or fully automated welding system. Drive unit contains three parallel subunits. Rotations of lead screws in three subunits coordinated to obtain desired motions in three degrees of freedom. Suitable for both variable-polarity plasma arc welding and gas/tungsten arc welding.

  13. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, the pressure gradients required to restart a stopped system, and the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 when the lubricating water flow is laminar or α = 19/7 when it is turbulent.

  14. Culture.

    PubMed

    Smith, Timothy B; Rodríguez, Melanie Domenech; Bernal, Guillermo

    2011-02-01

    This article summarizes the definitions, means, and research of adapting psychotherapy to clients' cultural backgrounds. We begin by reviewing the prevailing definitions of cultural adaptation and providing a clinical example. We present an original meta-analysis of 65 experimental and quasi-experimental studies involving 8,620 participants. The omnibus effect size of d = .46 indicates that treatments specifically adapted for clients of color were moderately more effective with that clientele than traditional treatments. The most effective treatments tended to be those with greater numbers of cultural adaptations. Mental health services targeted to a specific cultural group were several times more effective than those provided to clients from a variety of cultural backgrounds. We recommend a series of research-supported therapeutic practices that account for clients' culture, with culture-specific treatments being more effective than generally culture-sensitive treatments. © 2010 Wiley Periodicals, Inc.

  15. Automated Solar-Array Assembly

    NASA Technical Reports Server (NTRS)

    Soffa, A.; Bycer, M.

    1982-01-01

    Large arrays are rapidly assembled from individual solar cells by an automated production line developed for NASA's Jet Propulsion Laboratory. The apparatus positions cells within the array, attaches interconnection tabs, applies solder flux, and solders the interconnections. Cells are placed in either straight or staggered configurations and may be connected either in series or in parallel. Cells are attached at a rate of one every 5 seconds.

  16. Automated calculation and simulation systems

    NASA Astrophysics Data System (ADS)

    Ohl, Thorsten

    2003-04-01

    I briefly summarize the parallel sessions on Automated Calculation and Simulation Systems for high-energy particle physics phenomenology at ACAT 2002 (Moscow State University, June 2002), present a short overview of the current status of the field, and try to identify the important trends.

  17. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  18. Automated High Throughput Drug Target Crystallography

    SciTech Connect

    Rupp, B

    2005-02-18

    The molecular structures of drug target proteins and receptors form the basis for 'rational' or structure-guided drug design. The majority of target structures are experimentally determined by protein X-ray crystallography, which has evolved into a highly automated, high-throughput drug discovery and screening tool. Process automation has accelerated tasks from parallel protein expression, through fully automated crystallization and rapid data collection, to highly efficient structure determination methods. A thoroughly designed automation technology platform, supported by a powerful informatics infrastructure, forms the basis for optimal workflow implementation and for the data mining and analysis tools needed to generate new leads from experimental protein drug target structures.

  19. Automation of antimicrobial activity screening.

    PubMed

    Forry, Samuel P; Madonna, Megan C; López-Pérez, Daneli; Lin, Nancy J; Pasco, Madeleine D

    2016-03-01

    Manual and automated methods were compared for routine screening of compounds for antimicrobial activity. Automation generally accelerated assays and required less user intervention while producing comparable results. Automated protocols were validated for planktonic, biofilm, and agar cultures of the oral microbe Streptococcus mutans that is commonly associated with tooth decay. Toxicity assays for the known antimicrobial compound cetylpyridinium chloride (CPC) were validated against planktonic, biofilm forming, and 24 h biofilm culture conditions, and several commonly reported toxicity/antimicrobial activity measures were evaluated: the 50 % inhibitory concentration (IC50), the minimum inhibitory concentration (MIC), and the minimum bactericidal concentration (MBC). Using automated methods, three halide salts of cetylpyridinium (CPC, CPB, CPI) were rapidly screened with no detectable effect of the counter ion on antimicrobial activity.
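
    Of the reported measures, the MIC readout automates most directly: it is the lowest tested concentration at which growth (e.g., endpoint optical density) stays below a no-growth threshold. A minimal sketch with an invented threshold and example values:

        # Hedged sketch of an MIC readout from a dilution series; the OD
        # threshold and values are illustrative assumptions.
        def mic(concentrations, od_values, no_growth_od=0.05):
            """Paired concentration/OD lists; returns lowest inhibitory conc."""
            inhibited = [c for c, od in zip(concentrations, od_values)
                         if od <= no_growth_od]
            return min(inhibited) if inhibited else None

        # e.g. mic([0.5, 1, 2, 4, 8], [0.92, 0.78, 0.31, 0.03, 0.02]) -> 4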

  20. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into "anchors" around reference documents called "pivots". We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora "Bag of Words" program, and initial performance results of an end-to-end document processing workflow are reported.
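
    The distance-saving trick at the heart of the Anchors Hierarchy can be shown in isolation: for a metric d, precomputed distances to a pivot p give the lower bound |d(x,p) - d(q,p)| <= d(x,q), so many exact document-to-document distances never need to be computed. A hedged sketch of that pruning step (not the full algorithm):

        # Triangle-inequality pruning: skip the exact distance whenever the
        # pivot-based lower bound already rules a candidate out.
        def nearest(query, points, dist, pivot, d_to_pivot, best=float("inf")):
            dq_p = dist(query, pivot)
            winner = None
            for x, dx_p in zip(points, d_to_pivot):
                if abs(dx_p - dq_p) >= best:  # bound: x cannot beat current best
                    continue                  # exact distance never computed
                d = dist(query, x)
                if d < best:
                    best, winner = d, x
            return winner, best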

  1. Automation pilot

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An important concept of the Action Information Management System (AIMS) approach is to evaluate office automation technology in the context of hands-on use by technical program managers, and to examine the human acceptance difficulties which may accompany the transition to a significantly changing work environment. The improved productivity and communications which result from the application of office automation technology are already well established for general office environments, but benefits unique to NASA are anticipated, and these will be explored in detail.

  2. Investigating the feasibility of scale up and automation of human induced pluripotent stem cells cultured in aggregates in feeder free conditions☆

    PubMed Central

    Soares, Filipa A.C.; Chandra, Amit; Thomas, Robert J.; Pedersen, Roger A.; Vallier, Ludovic; Williams, David J.

    2014-01-01

    The transfer of a laboratory process into a manufacturing facility is one of the most critical steps required for the large-scale production of cell-based therapy products. This study describes the first published protocol for scalable automated expansion of human induced pluripotent stem cell lines growing in aggregates in feeder-free and chemically defined medium. Cells were successfully transferred between different sites representative of research and manufacturing settings, and passaged both manually and using the CompacT SelecT automation platform. Modified protocols were developed for the automated system, and the management of cell aggregates (clumps) was identified as the critical step. Cellular morphology, pluripotency gene expression and differentiation into the three germ layers were used to compare the outcomes of the manual and automated processes. PMID:24440272

  3. Automated Microbial Metabolism Laboratory

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Development of the automated microbial metabolism laboratory (AMML) concept is reported. The focus of effort of AMML was on the advanced labeled release experiment. Labeled substrates, inhibitors, and temperatures were investigated to establish a comparative biochemical profile. Profiles at three time intervals on soil and pure cultures of bacteria isolated from soil were prepared to establish a complete library. The development of a strategy for the return of a soil sample from Mars is also reported.

  4. Culture.

    ERIC Educational Resources Information Center

    1997

    Twelve conference papers on cultural aspects of second language instruction include: "Towards True Multiculturalism: Ideas for Teachers" (Brian McVeigh); "Comparing Cultures Through Critical Thinking: Development and Interpretations of Meaningful Observations" (Laurel D. Kamada); "Authority and Individualism in Japan and the…

  5. Robotic platform for parallelized cultivation and monitoring of microbial growth parameters in microwell plates.

    PubMed

    Knepper, Andreas; Heiser, Michael; Glauche, Florian; Neubauer, Peter

    2014-12-01

    The enormous space of possible bioprocess variants challenges process development to fix a commercial process within cost and time constraints. Although some cultivation systems and some devices for unit operations combine the latest technology in miniaturization, parallelization, and sensing, the degree of automation in upstream and downstream bioprocess development is still limited to single steps. We aim to face this challenge with an interdisciplinary approach to significantly shorten development times and costs. As a first step, we scaled down analytical assays to the microliter scale and created automated procedures for starting the cultivation and monitoring the optical density (OD), pH, concentrations of glucose and acetate in the culture medium, and product formation in fed-batch cultures in the 96-well format. The separate measurements of pH, OD, and concentrations of acetate and glucose were then combined into one method. This method enables automated process monitoring at dedicated intervals (e.g., also during the night). By this approach, we managed to increase the information content of cultivations in 96-microwell plates, turning them into a suitable tool for high-throughput bioprocess development. Here, we present the flowcharts as well as cultivation data of our automation approach.

  6. Office automation.

    PubMed

    Arenson, R L

    1986-03-01

    By now, the term "office automation" should have more meaning for those readers who are not intimately familiar with the subject. Not all of the preceding material pertains to every department or practice, but certainly, word processing and simple telephone management are key items. The size and complexity of the organization will dictate the usefulness of electronic mail and calendar management, and the individual radiologist's personal needs and habits will determine the usefulness of the home computer. Perhaps the most important ingredient for success in the office automation arena relates to the ability to integrate information from various systems in a simple and flexible manner. Unfortunately, this is perhaps the one area that most office automation systems have ignored or handled poorly. In the personal computer world, there has been much emphasis recently on integration of packages such as spreadsheet, database management, word processing, graphics, time management, and communications. This same philosophy of integration has been applied to a few office automation systems, but these are generally vendor-specific and do not allow for a mixture of foreign subsystems. During the next few years, it is likely that a few vendors will emerge as dominant in this integrated office automation field and will stress simplicity and flexibility as major components.

  7. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed-memory parallel computer architectures, where previously only shared-memory parallel support for this technique had been developed. Attribute evaluation is part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed-memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis, which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  8. Parallel reactor systems for bioprocess development.

    PubMed

    Weuster-Botz, Dirk

    2005-01-01

    Controlled parallel bioreactor systems allow fed-batch operation at early stages of process development. The characteristics of shaken bioreactors operated in parallel (shake flask, microtiter plate), sparged bioreactors (small-scale bubble column) and stirred bioreactors (stirred tank, stirred column) are briefly summarized. Parallel fed-batch operation is achieved with an intermittent feeding and pH-control system for up to 16 bioreactors operated in parallel at a scale of 100 ml. Examples of the scale-up and scale-down of pH-controlled microbial fed-batch processes demonstrate that controlled parallel reactor systems can make bioprocess development more effective. Future developments are also outlined, including units of 48 parallel stirred-tank reactors with individual pH and pO2 control, automation, and a liquid handling system, operated at the milliliter scale.

  9. Habitat automation

    NASA Technical Reports Server (NTRS)

    Swab, Rodney E.

    1992-01-01

    A habitat, on either the surface of the Moon or Mars, will be designed and built with the proven technologies of that day. These technologies will be mature and readily available to the habitat designer. We believe an acceleration of the normal pace of automation would allow a habitat to be safer and more easily maintained than would be the case otherwise. This document examines the operation of a habitat and describes elements of that operation which may benefit from an increased use of automation. Research topics within the automation realm are then defined and discussed with respect to the role they can have in the design of the habitat. Problems associated with the integration of advanced technologies into real-world projects at NASA are also addressed.

  10. Automating Finance

    ERIC Educational Resources Information Center

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  11. Simplified Automated Image Analysis for Detection and Phenotyping of Mycobacterium tuberculosis on Porous Supports by Monitoring Growing Microcolonies

    PubMed Central

    den Hertog, Alice L.; Visser, Dennis W.; Ingham, Colin J.; Fey, Frank H. A. G.; Klatser, Paul R.; Anthony, Richard M.

    2010-01-01

    Background Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic-observation drug susceptibility (MODS) testing, as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high-burden settings. Methods Here we explore the growth of Mycobacterium tuberculosis microcolonies, imaged by automated digital microscopy, cultured on porous aluminium oxide (PAO) supports. Repeated imaging during colony growth greatly simplifies "computer vision", and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, the medium to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. Significance Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation. PMID:20544033

  12. Cultural

    Treesearch

    Wilbur F. LaPage

    1971-01-01

    A critical look at outdoor recreation research and some underlying premises. The author focuses on the concept of culture as communication and how it influences our perception of problems and our search for solutions. Both outdoor recreation and science are viewed as subcultures that have their own bodies of mythology, making recreation problems more difficult to...

  13. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to check out each plug-in in the feature is inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
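
    PEPC itself targets Eclipse RCP feature projects; the pattern it describes, parsing a feature's plug-in list and checking the plug-ins out through a bounded thread pool, can be sketched as follows. The repository command and URL are placeholders, not PEPC's actual implementation.

        # Thread-pool checkout sketch: I/O-bound jobs overlap, so a bounded
        # pool saturates the network link much as described above.
        import subprocess
        import xml.etree.ElementTree as ET
        from concurrent.futures import ThreadPoolExecutor

        def plugins_in_feature(feature_xml):
            # Eclipse feature.xml lists <plugin id="..."/> elements.
            root = ET.parse(feature_xml).getroot()
            return [p.get("id") for p in root.iter("plugin")]

        def checkout(plugin_id):
            subprocess.run(["svn", "checkout",
                            f"https://repo.example/{plugin_id}"],
                           check=True)  # placeholder command and URL
            return plugin_id

        with ThreadPoolExecutor(max_workers=8) as pool:  # configurable size
            for done in pool.map(checkout, plugins_in_feature("feature.xml")):
                print("checked out:", done)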

  14. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops at the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
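
    CAPO itself emits OpenMP directives for FORTRAN, but the pattern it automates, an outermost parallel loop with correctly classified private and reduction variables, can be loosely illustrated in Python with numba's prange (an assumption for illustration only; numba is not part of CAPO):

```python
from numba import njit, prange

@njit(parallel=True)
def dot(a, b):
    # What a directive-generating tool must determine by dependence
    # analysis: the loop over i is parallel, i is private to each
    # iteration, and s is a reduction variable.
    s = 0.0
    for i in prange(a.shape[0]):
        s += a[i] * b[i]
    return s
```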

  15. Protocol for Automated Zooplankton Analysis

    DTIC Science & Technology

    2010-01-01

    Protocol for Automated Zooplankton Analysis. [Figure 1 of the report: photograph of the SensoPlate glass-bottom cell culture plate.] ... (Artemia franciscana) and rotifers (Brachionus plicatilis and B. calyciflorus). Initial work was conducted with homogeneous monocultures with little to ... resistant materials. Based on these criteria, NRL used the SensoPlate Glass Bottom Cell Culture Plates (Item # 692892; Greiner Bio-One, Monroe, NC

  16. Continuous-flow automation of the Lactobacillus casei serum folate assay.

    PubMed Central

    Tennant, G B

    1977-01-01

    A method is described for the continuous-flow automation of the serum folate assay using Lactobacillus casei. The total incubation period is approximately four hours. The growth response of the organism to folate is estimated by measuring the rate of reduction of 2,3,5-triphenyltetrazolium chloride (TTC). A simple continuous culture apparatus is used to grow the inoculum. Supplementation of the assay medium is necessary to obtain parallel results. A statistical assessment shows a favourable comparison with the whole-serum tube assay using a chloramphenicol-resistant strain of L. casei. The method is less sensitive to inhibitory substances than the tube assay. PMID:415069

  17. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
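
    Two pivots are compatible when they share no row or column, so a set of mutually compatible pivots can be eliminated in parallel. A greedy sketch of candidate selection ordered by Markowitz count, (r-1)(c-1) for an entry in a row with r nonzeros and a column with c nonzeros; the paper's algorithm additionally checks numerical stability, which is omitted here:

```python
import numpy as np

def compatible_pivots(A, tol=1e-8):
    """Greedy illustration on a dense test matrix: scan nonzero entries
    in order of increasing Markowitz count and keep those whose row and
    column are still unused. Not the paper's exact algorithm."""
    nz = np.abs(A) > tol
    r = nz.sum(axis=1)                        # nonzeros per row
    c = nz.sum(axis=0)                        # nonzeros per column
    cands = sorted((int((r[i] - 1) * (c[j] - 1)), i, j)
                   for i, j in zip(*np.nonzero(nz)))
    used_rows, used_cols, pivots = set(), set(), []
    for _, i, j in cands:
        if i not in used_rows and j not in used_cols:
            pivots.append((i, j))
            used_rows.add(i)
            used_cols.add(j)
    return pivots                             # eliminable in parallel
```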

  18. Automating the multiprocessing environment

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.

    1989-01-01

    An approach to automate the programming and operation of tree-structured networks of multiprocessor systems is discussed. A conceptual, knowledge-based operating environment is presented, and requirements for two major technology elements are identified as follows: (1) An intelligent information translator is proposed for implementing information transfer between dissimilar hardware and software, thereby enabling independent and modular development of future systems and promoting language-independence of codes and information; (2) A resident system activity manager, which recognizes the system's capabilities and monitors the status of all systems within the environment, is proposed for integrating dissimilar systems into effective parallel processing resources to optimally meet user needs. Finally, key computational capabilities which must be provided before the environment can be realized are identified.

  19. Automating CPM-GOMS

    NASA Technical Reports Server (NTRS)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2002-01-01

    CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the methods available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail dependent on the predictions required. Although GOMS has proven useful in HCI, tools to support the

  1. Agile automated vision

    NASA Astrophysics Data System (ADS)

    Fandrich, Juergen; Schmitt, Lorenz A.

    1994-11-01

    The microelectronic industry is a protagonist in driving automated vision to new paradigms. Today semiconductor manufacturers use vision systems quite frequently in their fabs in the front-end process. In fact, the process depends on reliable image processing systems. In the back-end process, where ICs are assembled and packaged, vision systems are today only partly used. But in the next years automated vision will become compulsory for the back-end process as well. Vision will be fully integrated into every IC package production machine to increase yields and reduce costs. Modern high-speed material processing requires dedicated and efficient concepts in image processing. At the same time, the integration of various equipment in a production plant demands unified handling of data flow and interfaces. Only agile vision systems can cope with these conflicting demands: fast, reliable, adaptable, scalable and comprehensive. A powerful hardware platform is an indispensable requirement for the use of advanced and reliable, but computationally intensive, image processing algorithms. The massively parallel SIMD hardware product LANTERN/VME supplies a powerful platform for existing and new functionality. LANTERN/VME is used with a new optical sensor for IC package lead inspection. This is done in 3D, including horizontal and coplanarity inspection. The appropriate software is designed for lead inspection, alignment and control tasks in IC package production and handling equipment, like Trim&Form, Tape&Reel and Pick&Place machines.

  2. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.

  3. Multiplex Identification of Gram-Positive Bacteria and Resistance Determinants Directly from Positive Blood Culture Broths: Evaluation of an Automated Microarray-Based Nucleic Acid Test

    PubMed Central

    Buchan, Blake W.; Ginocchio, Christine C.; Manii, Ryhana; Cavagnolo, Robert; Pancholi, Preeti; Swyers, Lettie; Thomson, Richard B.; Anderson, Christopher; Kaul, Karen; Ledeboer, Nathan A.

    2013-01-01

    Background A multicenter study was conducted to evaluate the diagnostic accuracy (sensitivity and specificity) of the Verigene Gram-Positive Blood Culture Test (BC-GP) test to identify 12 Gram-positive bacterial gene targets and three genetic resistance determinants directly from positive blood culture broths containing Gram-positive bacteria. Methods and Findings 1,252 blood cultures containing Gram-positive bacteria were prospectively collected and tested at five clinical centers between April, 2011 and January, 2012. An additional 387 contrived blood cultures containing uncommon targets (e.g., Listeria spp., S. lugdunensis, vanB-positive Enterococci) were included to fully evaluate the performance of the BC-GP test. Sensitivity and specificity for the 12 specific genus or species targets identified by the BC-GP test ranged from 92.6%–100% and 95.4%–100%, respectively. Identification of the mecA gene in 599 cultures containing S. aureus or S. epidermidis was 98.6% sensitive and 94.3% specific compared to cefoxitin disk method. Identification of the vanA gene in 81 cultures containing Enterococcus faecium or E. faecalis was 100% sensitive and specific. Approximately 7.5% (87/1,157) of single-organism cultures contained Gram-positive bacteria not present on the BC-GP test panel. In 95 cultures containing multiple organisms the BC-GP test was in 71.6% (68/95) agreement with culture results. Retrospective analysis of 107 separate blood cultures demonstrated that identification of methicillin resistant S. aureus and vancomycin resistant Enterococcus spp. was completed an average of 41.8 to 42.4 h earlier using the BC-GP test compared to routine culture methods. The BC-GP test was unable to assign mecA to a specific organism in cultures containing more than one Staphylococcus isolate and does not identify common blood culture contaminants such as Micrococcus, Corynebacterium, and Bacillus. Conclusions The BC-GP test is a multiplex test capable of detecting most
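
    The accuracy figures quoted above reduce to simple two-by-two counts against the reference method; a brief reminder of the definitions used, with made-up counts rather than the study's raw data:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), as computed
    for each target against the reference culture result."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative use only (hypothetical counts):
sens, spec = sensitivity_specificity(tp=142, fp=3, fn=2, tn=450)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```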

  4. Low-cost, flexible polymer arrays for long-term neuronal culture.

    PubMed

    Hogan, N Catherine; Talei-Franzesi, Giovanni; Abudayyeh, Omar; Taberner, Andrew; Hunter, Ian

    2012-01-01

    Conducting polymers are promising materials for fabrication of microelectrode arrays for both neural stimulation and recording. Our ability to engineer the morphology and composition of polypyrrole, together with its suitability as an electrically addressable tissue/cell substrate, has been used to develop an inexpensive, disposable three-dimensional polymeric array for use in neuronal culture and drug discovery. These arrays could be interfaced with a fixed, parallel stimulation and optical imaging system, amenable to automated handling and data analysis.

  5. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts of parallel processing.

  6. The RABiT: a rapid automated biodosimetry tool for radiological triage. II. Technological developments.

    PubMed

    Garty, Guy; Chen, Youhua; Turner, Helen C; Zhang, Jian; Lyulko, Oleksandra V; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Lawrence Yao, Y; Brenner, David J

    2011-08-01

    Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. The RABiT analyses fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cut-off dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day.

  7. The RABiT: A Rapid Automated Biodosimetry Tool For Radiological Triage. II. Technological Developments

    PubMed Central

    Garty, Guy; Chen, Youhua; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Brenner, David J.

    2011-01-01

    Purpose Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. Materials and methods The RABiT analyzes fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cutoff dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. Results We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Conclusions Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day. PMID:21557703

  8. First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    NASA Technical Reports Server (NTRS)

    Griffin, Sandy (Editor)

    1987-01-01

    Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.

  9. Detection of Salmonella spp. with the BACTEC 9240 Automated Blood Culture System in 2008 - 2014 in Southern Iran (Shiraz): Biogrouping, MIC, and Antimicrobial Susceptibility Profiles of Isolates

    PubMed Central

    Anvarinejad, Mojtaba; Pouladfar, Gholam Reza; Pourabbas, Bahman; Amin Shahidi, Maneli; Rafaatpour, Noroddin; Dehyadegari, Mohammad Ali; Abbasi, Pejman; Mardaneh, Jalal

    2016-01-01

    Background Human salmonellosis continues to be a major international problem, in terms of both morbidity and economic losses. The antibiotic resistance of Salmonella is an increasing public health emergency, since infections from resistant bacteria are more difficult and costly to treat. Objectives The aims of the present study were to investigate the isolation of Salmonella spp. with the BACTEC automated system from blood samples during 2008 - 2014 in southern Iran (Shiraz). Detection of subspecies, biogrouping, and antimicrobial susceptibility testing by the disc diffusion and agar dilution methods were performed. Patients and Methods A total of 19 Salmonella spp. were consecutively isolated using BACTEC from blood samples of patients between 2008 and 2014 in Shiraz, Iran. The isolates were identified as Salmonella, based on biochemical tests embedded in the API-20E system. In order to characterize the biogroups and subspecies, biochemical testing was performed. Susceptibility testing (disc diffusion and agar dilution) and extended-spectrum β-lactamase (ESBL) detection were performed according to the clinical and laboratory standards institute (CLSI) guidelines. Results Of the total 19 Salmonella spp. isolates recovered by the BACTEC automated system, all belonged to the Salmonella enterica subsp. houtenae. Five isolates (26.5%) were resistant to azithromycin. Six (31.5%) isolates with the disc diffusion method and five (26.3%) with the agar dilution method displayed resistance to nalidixic acid (minimum inhibitory concentration [MIC] > 32 μg/mL). All nalidixic acid-resistant isolates were also ciprofloxacin-sensitive. All isolates were ESBL-negative. Twenty-one percent of isolates were found to be resistant to chloramphenicol (MIC ≥ 32 μg/mL), and 16% were resistant to ampicillin (MIC ≥ 32 μg/mL). Conclusions The results indicate that multidrug-resistant (MDR) strains of Salmonella are increasing in number, and fewer antibiotics may be useful for

  10. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  11. Contactless automated manipulation of mesoscale objects using opto-fluidic actuation and visual servoing.

    PubMed

    Vela, Emir; Hafez, Moustapha; Régnier, Stéphane

    2014-05-01

    This work describes an automated opto-fluidic system for parallel non-contact manipulation of microcomponents. The strong dynamics of laser-driven thermocapillary flows were used to drag microcomponents at high speeds. High-speed flows allowed micro-objects to be manipulated in a parallel manner using only a single laser and a mirror scanner. An automated process was implemented using visual servoing with a high-speed camera in order to achieve accurate parallel manipulation. Automated manipulation of two glass beads of 30 to 300 μm in diameter, moving in parallel at speeds in the range of mm/s, was demonstrated.

  12. Parallelism in integrated fluidic circuits

    NASA Astrophysics Data System (ADS)

    Bousse, Luc J.; Kopf-Sill, Anne R.; Parce, J. W.

    1998-04-01

    Many research groups around the world are working on integrated microfluidics. The goal of these projects is to automate and integrate the handling of liquid samples and reagents for measurement and assay procedures in chemistry and biology. Ultimately, it is hoped that this will lead to a revolution in chemical and biological procedures similar to that caused in electronics by the invention of the integrated circuit. The optimal size scale of channels for liquid flow is determined by basic constraints to be somewhere between 10 and 100 micrometers . In larger channels, mixing by diffusion takes too long; in smaller channels, the number of molecules present is so low it makes detection difficult. At Caliper, we are making fluidic systems in glass chips with channels in this size range, based on electroosmotic flow, and fluorescence detection. One application of this technology is rapid assays for drug screening, such as enzyme assays and binding assays. A further challenge in this area is to perform multiple functions on a chip in parallel, without a large increase in the number of inputs and outputs. A first step in this direction is a fluidic serial-to-parallel converter. Fluidic circuits will be shown with the ability to distribute an incoming serial sample stream to multiple parallel channels.

  13. Universal protocol for the rapid automated detection of carbapenem-resistant Gram-negative bacilli directly from blood cultures by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF/MS).

    PubMed

    Oviaño, Marina; Sparbier, Katrin; Barba, Maria José; Kostrzewa, Markus; Bou, Germán

    2016-12-01

    Detection of carbapenemase-producing bacteria directly from blood cultures is a major challenge, as patients with bacteraemia are critically ill. Early detection can be helpful for selection of the most appropriate antibiotic therapy as well as adequate control of outbreaks. In the current study, a novel matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF)-based method was developed for the rapid, automated detection of carbapenemase-producing Enterobacteriaceae, Pseudomonas aeruginosa and Acinetobacter baumannii directly from blood cultures. Carbapenemase activity was determined in 30 min by measuring hydrolysis of imipenem (0.31 mg/mL) in blood cultures spiked with a series of 119 previously characterised isolates, 81 of which carried a carbapenemase enzyme (10 blaKPC, 10 blaVIM, 10 blaNDM, 10 blaIMP, 26 blaOXA-48-type, 9 blaOXA-23, 1 blaOXA-237, 3 blaOXA-24 and 2 blaOXA-58). Twenty blood cultures obtained from bacteraemic patients carrying blaOXA-48-producing isolates were also analysed using the same protocol. Analysis was performed using MALDI-TOF Biotyper® Compass software, which automatically provides a result of sensitivity or resistance, calculated as the logRQ, or ratio of hydrolysis of the antibiotic. This assay is simple to perform, inexpensive, time-saving, universal for Gram-negative bacilli, and highly reliable (overall sensitivity and specificity of 98% and 100%, respectively). Moreover, the protocol could be established as a standardised method in clinical laboratories as it does not require specialised training in mass spectrometry. Copyright © 2016 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.
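
    The decision rule rests on a single hydrolysis ratio. The exact Biotyper Compass calculation is not given in the abstract, so the sketch below assumes a log10 ratio of hydrolyzed-to-intact imipenem peak intensities and a hypothetical cutoff; both are labeled stand-ins, not the vendor's formula:

```python
import math

def log_rq(hydrolyzed, intact, eps=1e-9):
    """Assumed form of the hydrolysis ratio (logRQ): log10 of the
    hydrolyzed vs. intact imipenem peak intensities. The vendor's
    actual calculation may differ."""
    return math.log10((hydrolyzed + eps) / (intact + eps))

def classify(value, cutoff=0.4):
    # Hypothetical cutoff: above it, carbapenemase activity is reported.
    return "resistant" if value > cutoff else "susceptible"
```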

  14. Rapid detection of Gram-negative bacteria and their drug resistance genes from positive blood cultures using an automated microarray assay.

    PubMed

    Han, Eunhee; Park, Dong-Jin; Kim, Yukyoung; Yu, Jin Kyung; Park, Kang Gyun; Park, Yeon-Joon

    2015-03-01

    We evaluated the performance of the Verigene Gram-negative blood culture (BC-GN) assay (CE-IVD version) for identification of Gram-negative (GN) bacteria and detection of resistance genes. A total of 163 GN organisms (72 characterized strains and 91 clinical isolates from 86 patients) were tested; among the clinical isolates, 86 (94.5%) isolates were included in the BC-GN panel. For identification, the agreement was 98.6% (146/148, 95% confidence interval [CI], 92.1-100) and 70% (7/10, 95% CI, 53.5-100) for monomicrobial and polymicrobial cultures, respectively. Of the 48 resistance genes harbored by 43 characterized strains, all were correctly detected. Of the 19 clinical isolates harboring resistance genes, 1 CTX-M-producing Escherichia coli isolated in polymicrobial culture was not detected. Overall, the BC-GN assay provides acceptable accuracy for rapid identification of Gram-negative bacteria and detection of resistance genes compared with routine laboratory methods, despite limitations in the number of genera/species and resistance genes included in the panel and lower sensitivity in polymicrobial cultures. Copyright © 2015. Published by Elsevier Inc.

  15. Three-dimensional growth of human endothelial cells in an automated cell culture experiment container during the SpaceX CRS-8 ISS space mission - The SPHEROIDS project.

    PubMed

    Pietsch, Jessica; Gass, Samuel; Nebuloni, Stefano; Echegoyen, David; Riwaldt, Stefan; Baake, Christin; Bauer, Johann; Corydon, Thomas J; Egli, Marcel; Infanger, Manfred; Grimm, Daniela

    2017-04-01

    Human endothelial cells (ECs) were sent to the International Space Station (ISS) to determine the impact of microgravity on the formation of three-dimensional structures. For this project, an automatic experiment unit (EU) was designed to allow cell culture in space. In order to enable safe cell culture, cell nourishment, and fixation after a pre-programmed timeframe, the materials used for construction of the EUs were tested for biocompatibility. These tests revealed high biocompatibility for all parts of the EUs that were in contact with the cells or the medium used. Most importantly, polyether ether ketone surrounding the incubation chamber kept cellular viability above 80% and allowed the cells to adhere as long as they were exposed to normal gravity. After assembly of the EUs, the ECs were cultured therein, where they showed good cell viability for at least 14 days. In addition, the functionality of the automatic medium exchange and fixation procedures was confirmed. Two days before launch, the ECs were cultured in the EUs, which were afterwards mounted on the SpaceX CRS-8 rocket. Five and 12 days after launch the cells were fixed. Subsequent analyses revealed a scaffold-free formation of spheroids in space.

  16. Comparison of automated BAX polymerase chain reaction and standard culture methods for detection of Listeria monocyogenes in blue crab meat (Callinectus sapidus) and blue crab processing plants

    USDA-ARS?s Scientific Manuscript database

    This study compared the BAX Polymerase Chain Reaction method (BAX PCR) with the Standard Culture Method (SCM) for detection of L. monocytogenes in blue crab meat and crab processing plants. The aim of this study was to address this data gap. Raw crabs, finished products and environmental sponge samp...

  17. Parallel Education and Defining the Fourth Sector.

    ERIC Educational Resources Information Center

    Chessell, Diana

    1996-01-01

    Parallel to the primary, secondary, postsecondary, and adult/community education sectors is education not associated with formal programs--learning in arts and cultural sites. The emergence of cultural and educational tourism is an opportunity for adult/community education to define itself by extending lifelong learning opportunities into parallel…

  18. Automation tools for flexible aircraft maintenance.

    SciTech Connect

    Prentice, William J.; Drotning, William D.; Watterberg, Peter A.; Loucks, Clifford S.; Kozlowski, David M.

    2003-11-01

    This report summarizes the accomplishments of the Laboratory Directed Research and Development (LDRD) project 26546 at Sandia, during the period FY01 through FY03. The project team visited four DoD depots that support extensive aircraft maintenance in order to understand critical needs for automation, and to identify maintenance processes for potential automation or integration opportunities. From the visits, the team identified technology needs and application issues, as well as non-technical drivers that influence the application of automation in depot maintenance of aircraft. Software tools for automation facility design analysis were developed, improved, extended, and integrated to encompass greater breadth for eventual application as a generalized design tool. The design tools for automated path planning and path generation have been enhanced to incorporate those complex robot systems with redundant joint configurations, which are likely candidate designs for a complex aircraft maintenance facility. A prototype force-controlled actively compliant end-effector was designed and developed based on a parallel kinematic mechanism design. This device was developed for demonstration of surface finishing, one of many in-contact operations performed during aircraft maintenance. This end-effector tool was positioned along the workpiece by a robot manipulator, programmed for operation by the automated planning tools integrated for this project. Together, the hardware and software tools demonstrate many of the technologies required for flexible automation in a maintenance facility.

  19. Automated Coal-Mining System

    NASA Technical Reports Server (NTRS)

    Gangal, M. D.; Isenberg, L.; Lewis, E. V.

    1985-01-01

    Proposed system offers safety and large return on investment. System, operating by year 2000, employs machines and processes based on proven principles. According to concept, line of parallel machines, connected in groups of four to service modules, attacks face of coal seam. High-pressure water jets and central auger on each machine break face. Jaws scoop up coal chunks, and auger grinds them and forces fragments into slurry-transport system. Slurry pumped through pipeline to point of use. Concept for highly automated coal-mining system increases productivity, makes mining safer, and protects health of mine workers.

  1. Compact, Automated, Frequency-Agile Microspectrofluorimeter

    NASA Technical Reports Server (NTRS)

    Fernandez, Salvador M.; Guignon, Ernest F.

    1995-01-01

    Compact, reliable, rugged, automated cell-culture and frequency-agile microspectrofluorimetric apparatus developed to perform experiments involving photometric imaging observations of single live cells. In original application, apparatus operates mostly unattended aboard spacecraft; potential terrestrial applications include automated or semiautomated diagnosis of pathological tissues in clinical laboratories, biomedical instrumentation, monitoring of biological process streams, and portable instrumentation for testing biological conditions in various environments. Offers obvious advantages over present laboratory instrumentation.

  2. A novel low-cost method for Mycobacterium avium subsp. paratuberculosis DNA extraction from an automated broth culture system for real-time PCR analysis.

    PubMed

    Salgado, Miguel; Verdugo, Cristobal; Heuer, Cord; Castillo, Pedro; Zamorano, Patricia

    2014-01-01

    PCR is a highly accurate technique for confirming the presence of Mycobacterium avium subsp. paratuberculosis (Map) in broth culture. In this study, a simple, efficient, and low-cost method of harvesting DNA from Map cultured in liquid medium was developed. The proposed protocol (Universidad Austral de Chile [UACH]) was evaluated by comparing its performance to that of two traditional techniques (a QIAamp DNA Stool Mini Kit and the cetyltrimethylammonium bromide [CTAB] method). The results were statistically assessed by agreement analysis, for which differences in the number of cycles to positive (CP) were compared by Student's t-test for paired samples and regression analysis. Twelve out of 104 fecal pools cultured were positive. The final PCR results for 11 samples analyzed with the QIAamp and UACH methods or ones examined with the QIAamp and CTAB methods were in agreement. Complete (100%) agreement was observed between data from the CTAB and UACH methods. CP values for the UACH and CTAB techniques were not significantly different, while the UACH method yielded significantly lower CP values compared to the QIAamp kit. The proposed extraction method combines reliability and efficiency with simplicity and lower cost.
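
    The statistical comparison described, paired CP values from the same samples under two extraction methods, is a textbook paired t-test; a sketch with made-up numbers, not the study's data:

```python
from scipy import stats

# Hypothetical cycles-to-positive (CP) values for the same six pooled
# samples extracted by the UACH and CTAB methods (illustrative only).
cp_uach = [24.1, 26.3, 22.8, 25.0, 23.9, 27.2]
cp_ctab = [24.4, 26.1, 23.0, 25.3, 24.2, 27.0]

t, p = stats.ttest_rel(cp_uach, cp_ctab)   # Student's t-test for paired samples
print(f"t = {t:.2f}, p = {p:.3f}")         # p > 0.05: no significant difference
```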

  3. Parallel computation of Gaussian processes

    NASA Astrophysics Data System (ADS)

    Preuss, R.; von Toussaint, U.

    2017-06-01

    Within the Bayesian framework we utilize Gaussian processes for parametric studies of long-running computer codes. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. Employing the sum over variances, an indicator of the quality of the fit, as the utility function, we established an optimized and automated sequential parameter selection procedure. However, it is often also desirable to utilize the parallel running capabilities of present computer technology and abandon sequential parameter selection for a faster overall turn-around time (wall-clock time). The paper proposes to achieve this by marginalizing over the expected outcomes at optimized test points in order to set up a pool of starting values for batch execution.
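
    A minimal sketch of the batch idea described above: repeatedly pick the candidate with the largest predictive variance, impute ("marginalize over") its unknown outcome with the GP's predictive mean, refit, and collect the chosen points for parallel execution. Function and parameter names are ours, not the paper's:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def select_batch(X, y, candidates, batch_size):
    """X, y: evaluated parameter points and outcomes; candidates: 2-D
    array of unevaluated points. Returns a batch to run in parallel."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    batch = []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        mu, sd = gp.predict(candidates, return_std=True)
        k = int(np.argmax(sd))                 # highest-variance candidate
        batch.append(candidates[k])
        X = np.vstack([X, candidates[k]])      # fantasize its outcome with
        y = np.append(y, mu[k])                # the predictive mean, refit
    return np.array(batch)
```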

  4. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  5. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  6. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
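
    The master/slave structure the report describes can be sketched with a toy Monte Carlo tally: the master distributes batches of histories, each slave draws from an independently seeded random-number stream (one of the parallel issues the report mentions), and results are collected centrally. Toy physics for illustration, not ITS itself:

```python
import numpy as np
from multiprocessing import Pool

def run_batch(args):
    seed, n_histories = args
    rng = np.random.default_rng(seed)    # independent per-slave RNG stream
    # Toy "transport": exponential free paths, tallied as a sum.
    return rng.exponential(scale=1.0, size=n_histories).sum()

if __name__ == "__main__":
    batches = [(seed, 10_000) for seed in range(8)]   # master's work list
    with Pool(processes=4) as pool:                   # four slave processes
        tallies = pool.map(run_batch, batches)        # distribute and collect
    print(sum(tallies) / 80_000)                      # mean path length ~ 1.0
```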

  7. Introduction to parallel programming

    SciTech Connect

    Brawer, S. )

    1989-01-01

    This book describes parallel programming and all the basic concepts illustrated by examples in a simplified FORTRAN. Concepts covered include: The parallel programming model; The creation of multiple processes; Memory sharing; Scheduling; Data dependencies. In addition, a number of parallelized applications are presented, including a discrete-time, discrete-event simulator, numerical integration, Gaussian elimination, and parallelized versions of the traveling salesman problem and the exploration of a maze.

  8. Automated reduction of instantaneous flow field images

    NASA Technical Reports Server (NTRS)

    Reynolds, G. A.; Short, M.; Whiffen, M. C.

    1987-01-01

    An automated data reduction system for the analysis of interference fringe patterns obtained using the particle image velocimetry technique is described. This system is based on digital image processing techniques that have provided the flexibility and speed needed to obtain more complete automation of the data reduction process. As approached here, this process includes scanning/searching for data on the photographic record, recognition of fringe patterns of sufficient quality, and, finally, analysis of these fringes to determine a local measure of the velocity magnitude and direction. The fringe analysis as well as the fringe image recognition are based on full frame autocorrelation techniques using parallel processing capabilities.
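
    Full-frame autocorrelation can be sketched via the Wiener-Khinchin theorem: the autocorrelation is the inverse FFT of the power spectrum, and the offset of the strongest off-center peak gives the fringe spacing and orientation, hence a local displacement estimate. A minimal version for illustration (the original system used dedicated parallel hardware, not numpy):

```python
import numpy as np

def fringe_offset(img):
    """Return the (dy, dx) pixel offset of the dominant autocorrelation
    peak of a fringe image, excluding the zero-lag peak."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(f) ** 2)))
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    ac[cy - 2:cy + 3, cx - 2:cx + 3] = 0.0   # suppress the central peak
    py, px = np.unravel_index(np.argmax(ac), ac.shape)
    return py - cy, px - cx
```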

  9. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  10. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  11. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  12. Parallel adaptive wavelet collocation method for PDEs

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
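
    The load-balancing step described above, reassigning whole trees so each process holds roughly the same number of grid points, is in essence a bin-balancing problem over the trees as indivisible quanta. A plain greedy sketch of that idea (not the paper's exact scheme):

```python
import heapq

def assign_trees(tree_points, n_procs):
    """tree_points: dict mapping tree id -> current grid-point count.
    Returns dict mapping process rank -> list of assigned tree ids."""
    heap = [(0, p) for p in range(n_procs)]          # (load, process)
    heapq.heapify(heap)
    owner = {p: [] for p in range(n_procs)}
    # Largest trees first, each to the currently least-loaded process.
    for tree, pts in sorted(tree_points.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        owner[p].append(tree)
        heapq.heappush(heap, (load + pts, p))
    return owner
```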

  13. Operations automation

    NASA Technical Reports Server (NTRS)

    Boreham, Charles Thomas

    1994-01-01

    This is truly the era of 'faster-better-cheaper' at the National Aeronautics and Space Administration/Jet Propulsion Laboratory (NASA/JPL). To continue JPL's primary mission of building and operating interplanetary spacecraft, all possible avenues are being explored in the search for better value for each dollar spent. A significant cost factor in any mission is the amount of manpower required to receive, decode, decommutate, and distribute spacecraft engineering and experiment data. The replacement of the many mission-unique data systems with the single Advanced Multimission Operations System (AMMOS) has already allowed for some manpower reduction. Now, we find that further economies are made possible by drastically reducing the number of human interventions required to perform the setup, data saving, station handover, processed data loading, and tear down activities that are associated with each spacecraft tracking pass. We have recently adapted three public domain tools to the AMMOS system which allow common elements to be scheduled and initialized without the normal human intervention. This is accomplished with a stored weekly event schedule. The manual entries and specialized scripts which had to be provided just prior to and during a pass are now triggered by the schedule to perform the functions unique to the upcoming pass. This combination of public domain software and the AMMOS system has been run in parallel with the flight operation in an online testing phase for six months. With this methodology, a savings of 11 man-years per year is projected with no increase in data loss or project risk. There are even greater savings to be gained as we learn other uses for this configuration.

  14. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  15. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
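
    The explicit message-passing strategy, option (1) above, can be sketched with mpi4py (an illustrative choice; the report predates it). Each rank sums a strided slice of the work and the partial results are combined by one explicit communication call; run with, e.g., mpiexec -n 4 python sum.py:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1_000_000
chunk = np.arange(rank, n, size, dtype=np.float64)  # this rank's strided slice
local = chunk.sum()
total = comm.reduce(local, op=MPI.SUM, root=0)      # explicit message passing
if rank == 0:
    print(total)                                    # equals n*(n-1)/2
```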

  16. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

  17. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. Steps in parallelizing this code and requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example, a speedup of 30 for 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.

  18. Evaluation of an ethidium monoazide–enhanced 16S rDNA real-time polymerase chain reaction assay for bacterial screening of platelet concentrates and comparison with automated culture

    PubMed Central

    Garson, Jeremy A; Patel, Poorvi; McDonald, Carl; Ball, Joanne; Rosenberg, Gillian; Tettmar, Kate I; Brailsford, Susan R; Pitt, Tyrone; Tedder, Richard S

    2014-01-01

    BACKGROUND Culture-based systems are currently the preferred means for bacterial screening of platelet (PLT) concentrates. Alternative bacterial detection techniques based on nucleic acid amplification have also been developed but these have yet to be fully evaluated. In this study we evaluate a novel 16S rDNA polymerase chain reaction (PCR) assay and compare its performance with automated culture. STUDY DESIGN AND METHODS A total of 2050 time-expired, 176 fresh, and 400 initial-reactive PLT packs were tested by real-time PCR using broadly reactive 16S primers and a “universal” probe (TaqMan, Invitrogen). PLTs were also tested using a microbial detection system (BacT/ALERT, bioMérieux) under aerobic and anaerobic conditions. RESULTS Seven of 2050 (0.34%) time-expired PLTs were found repeat reactive by PCR on the initial nucleic acid extract but none of these was confirmed positive on testing frozen second aliquots. BacT/ALERT testing also failed to confirm any time-expired PLTs positive on repeat testing, although 0.24% were reactive on the first test. Three of the 400 “initial-reactive” PLT packs were found by both PCR and BacT/ALERT to be contaminated (Escherichia coli, Listeria monocytogenes, and Streptococcus vestibularis identified) and 14 additional packs were confirmed positive by BacT/ALERT only. In 13 of these cases the contaminating organisms were identified as anaerobic skin or oral commensals and the remaining pack was contaminated with Streptococcus pneumoniae. CONCLUSION These results demonstrate that the 16S PCR assay is less sensitive than BacT/ALERT and inappropriate for early testing of concentrates. However, rapid PCR assays such as this may be suitable for a strategy of late or prerelease testing. PMID:23701338

  19. Evaluation of an ethidium monoazide-enhanced 16S rDNA real-time polymerase chain reaction assay for bacterial screening of platelet concentrates and comparison with automated culture.

    PubMed

    Garson, Jeremy A; Patel, Poorvi; McDonald, Carl; Ball, Joanne; Rosenberg, Gillian; Tettmar, Kate I; Brailsford, Susan R; Pitt, Tyrone; Tedder, Richard S

    2014-03-01

    Culture-based systems are currently the preferred means for bacterial screening of platelet (PLT) concentrates. Alternative bacterial detection techniques based on nucleic acid amplification have also been developed but these have yet to be fully evaluated. In this study we evaluate a novel 16S rDNA polymerase chain reaction (PCR) assay and compare its performance with automated culture. A total of 2050 time-expired, 176 fresh, and 400 initial-reactive PLT packs were tested by real-time PCR using broadly reactive 16S primers and a "universal" probe (TaqMan, Invitrogen). PLTs were also tested using a microbial detection system (BacT/ALERT, bioMérieux) under aerobic and anaerobic conditions. Seven of 2050 (0.34%) time-expired PLTs were found repeat reactive by PCR on the initial nucleic acid extract but none of these was confirmed positive on testing frozen second aliquots. BacT/ALERT testing also failed to confirm any time-expired PLTs positive on repeat testing, although 0.24% were reactive on the first test. Three of the 400 "initial-reactive" PLT packs were found by both PCR and BacT/ALERT to be contaminated (Escherichia coli, Listeria monocytogenes, and Streptococcus vestibularis identified) and 14 additional packs were confirmed positive by BacT/ALERT only. In 13 of these cases the contaminating organisms were identified as anaerobic skin or oral commensals and the remaining pack was contaminated with Streptococcus pneumoniae. These results demonstrate that the 16S PCR assay is less sensitive than BacT/ALERT and inappropriate for early testing of concentrates. However, rapid PCR assays such as this may be suitable for a strategy of late or prerelease testing. © 2013 American Association of Blood Banks.

  20. Autonomy and Automation

    NASA Technical Reports Server (NTRS)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  1. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  2. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  3. Generating Random Parallel Test Forms Using CTT in a Computer-Based Environment.

    ERIC Educational Resources Information Center

    Weiner, John A.; Gibson, Wade M.

    1998-01-01

    Describes a procedure for automated-test-forms assembly based on Classical Test Theory (CTT). The procedure uses stratified random-content sampling and test-form preequating to ensure both content and psychometric equivalence in generating virtually unlimited parallel forms. Extends the usefulness of CTT in automated test construction. (Author/SLD)

  4. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  5. Parallel Processing Creates a Low-Cost Growth Path.

    ERIC Educational Resources Information Center

    Shekhel, Alex; Freeman, Eva

    1987-01-01

    Discusses the advantages of parallel processor computers in terms of expandability, cost, performance and reliability, and suggests that such computers be used in library automation systems as a cost-effective approach to planning for the growth of information services and computer applications. (CLB)

  6. Low cost instrumentation: Parallel port analog to digital converter

    NASA Astrophysics Data System (ADS)

    Dierking, Matthew P.

    1993-02-01

    The personal computer (PC) has become a powerful and cost effective computing platform for use in the laboratory and industry. This Technical Memorandum presents the use of the PC parallel port adapter to implement a low cost analog to digital converter for general purpose instrumentation and automated data acquisition.

  7. Automation in Clinical Microbiology

    PubMed Central

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  8. Automation of industrial bioprocesses.

    PubMed

    Beyeler, W; DaPra, E; Schneider, K

    2000-01-01

    The dramatic development of new electronic devices within the last 25 years has had a substantial influence on the control and automation of industrial bioprocesses. Within this short period of time the method of controlling industrial bioprocesses has changed completely. In this paper, the authors take a practical approach focusing on the industrial applications of automation systems. Some milestones are highlighted, from the early attempts to use computers for the automation of biotechnological processes up to modern process automation systems. Special attention is given to the influence of Standards and Guidelines on the development of automation systems.

  9. Shoe-String Automation

    SciTech Connect

    Duncan, M.L.

    2001-07-30

    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  10. Languages for parallel architectures

    SciTech Connect

    Bakker, J.W.

    1989-01-01

    This book presents mathematical methods for modelling parallel computer architectures, based on the results of ESPRIT's project 415 on computer languages for parallel architectures. Presented are investigations incorporating a wide variety of programming styles, including functional, logic, and object-oriented paradigms. Topics covered include Philips's parallel object-oriented language POOL, lazy functional languages, the languages IDEAL, K-LEAF, FP2, and Petri-net semantics for the AADL language.

  11. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    [Recovered from a machine-summary table: languages C, Ada, C++, data-parallel FORTRAN, FORTRAN-90 (late 1992); topology: 2D mesh of node boards, one application processor per board; development tools.] As parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating their activities. Linda was designed to be such a tool, with three important goals in mind: to be portable, efficient, and easy to use

  12. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
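
    To make the cluster move concrete, below is a minimal Python sketch of one serial Wolff single-cluster update for the 2D Ising model. It illustrates the algorithm being parallelized, not the authors' parallel implementation; all function and variable names are ours.

    ```python
    import numpy as np

    def wolff_update(spins, beta, J=1.0, rng=None):
        """One Wolff single-cluster update on an L x L periodic Ising lattice."""
        rng = rng or np.random.default_rng()
        L = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta * J)  # bond-activation probability
        seed = (int(rng.integers(L)), int(rng.integers(L)))
        cluster_spin = spins[seed]
        in_cluster = np.zeros_like(spins, dtype=bool)
        in_cluster[seed] = True
        stack = [seed]
        while stack:  # grow the cluster one site at a time
            i, j = stack.pop()
            for ni, nj in (((i + 1) % L, j), ((i - 1) % L, j),
                           (i, (j + 1) % L), (i, (j - 1) % L)):
                if (not in_cluster[ni, nj] and spins[ni, nj] == cluster_spin
                        and rng.random() < p_add):
                    in_cluster[ni, nj] = True
                    stack.append((ni, nj))
        spins[in_cluster] *= -1  # flip the entire cluster at once
        return int(in_cluster.sum())
    ```

    The irregular size and shape of the cluster grown here is precisely what makes an efficient parallel decomposition nontrivial, which is the difficulty the paper addresses.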

  13. Problems of Automation and Management Principles Information Flow in Manufacturing

    NASA Astrophysics Data System (ADS)

    Grigoryuk, E. N.; Bulkin, V. V.

    2017-07-01

    Automated control systems of technological processes are complex systems that are characterized by the presence of elements with an overall focus, the systemic nature of the implemented algorithms for the exchange and processing of information, and a large number of functional subsystems. The article gives examples of automatic control systems and automated process control systems, drawing a parallel between them by identifying their strengths and weaknesses. A non-standard process control system is also proposed.

  14. Single-cell bacteria growth monitoring by automated DEP-facilitated image analysis.

    PubMed

    Peitz, Ingmar; van Leeuwen, Rien

    2010-11-07

    Growth monitoring is the method of choice in many assays measuring the presence or properties of pathogens, e.g. in diagnostics and food quality. Established methods, relying on culturing large numbers of bacteria, are rather time-consuming, while in healthcare time often is crucial. Several new approaches have been published, mostly aiming at assaying growth or other properties of a small number of bacteria. However, no method so far readily achieves single-cell resolution with a convenient and easy to handle setup that offers the possibility for automation and high throughput. We demonstrate these benefits in this study by employing dielectrophoretic capturing of bacteria in microfluidic electrode structures, optical detection and automated bacteria identification and counting with image analysis algorithms. For a proof-of-principle experiment we chose an antibiotic susceptibility test with Escherichia coli and polymyxin B. Growth monitoring is demonstrated on single cells and the impact of the antibiotic on the growth rate is shown. The minimum inhibitory concentration as a standard diagnostic parameter is derived from a dose-response plot. This report is the basis for further integration of image analysis code into device control. Ultimately, an automated and parallelized setup may be created, using an optical microscanner and many of the electrode structures simultaneously. Sufficient data for a sound statistical evaluation and a confirmation of the initial findings can then be generated in a single experiment.
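
    As a toy illustration of the final analysis step, reading the minimum inhibitory concentration (MIC) off a dose-response series of per-concentration growth rates could look like the sketch below; the zero-growth threshold convention and all names are our assumptions, not the authors' analysis code.

    ```python
    def minimum_inhibitory_concentration(concentrations, growth_rates, threshold=0.0):
        """Lowest antibiotic concentration whose measured growth rate does not
        exceed `threshold` (zero net growth by default); assumed convention."""
        for c, r in sorted(zip(concentrations, growth_rates)):
            if r <= threshold:
                return c
        return None  # no tested concentration inhibited growth

    # e.g. minimum_inhibitory_concentration([0.5, 1, 2, 4], [0.9, 0.4, 0.0, -0.1]) -> 2
    ```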

  15. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search of a single individual, a typical sequential process. To assure the same features of the sequential random search in the parallel version, we first analyze the spatial patterns of the targets encountered in the sequential search, for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution for each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
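
    A rough numpy sketch of the matching step described above, with all names ours: convert the measured mean and standard deviation of inter-target distances into the parameters of numpy's lognormal sampler (a standard moment identity), then let each parallel walker draw its own distance sequence.

    ```python
    import numpy as np

    def lognormal_params(mean, std):
        """Map a desired mean/std of the distance distribution to the
        (mu, sigma) of the underlying normal used by numpy's lognormal."""
        sigma2 = np.log(1.0 + (std / mean) ** 2)
        mu = np.log(mean) - 0.5 * sigma2
        return mu, np.sqrt(sigma2)

    def parallel_walker_distances(mean, std, n_walkers, n_targets, seed=None):
        """Each walker independently samples inter-target distances that
        reproduce the statistics measured in the sequential search."""
        rng = np.random.default_rng(seed)
        mu, sigma = lognormal_params(mean, std)
        return rng.lognormal(mu, sigma, size=(n_walkers, n_targets))
    ```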

  16. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  17. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  18. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  19. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
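
    Two of the patterns named above can be written down in a few lines. The numpy sketch below runs serially, but each loop iteration corresponds to one step a parallel machine would execute across all elements at once, giving O(log n) parallel steps for both the tree reduction and the Hillis-Steele inclusive scan; all names are ours.

    ```python
    import numpy as np

    def tree_reduce(x):
        """Pairwise (tree) sum: each while-iteration is one parallel step."""
        x = np.asarray(x, dtype=float).copy()
        stride = 1
        while stride < x.size:
            idx = np.arange(0, x.size - stride, 2 * stride)
            x[idx] += x[idx + stride]
            stride *= 2
        return x[0]

    def inclusive_scan(x):
        """Hillis-Steele inclusive prefix sum: shift-and-add, doubling the shift."""
        x = np.asarray(x, dtype=float).copy()
        shift = 1
        while shift < x.size:
            x[shift:] = x[shift:] + x[:-shift]
            shift *= 2
        return x

    # tree_reduce([1, 2, 3, 4]) -> 10.0; inclusive_scan([1, 2, 3, 4]) -> [1, 3, 6, 10]
    ```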

  20. A self-contained, programmable microfluidic cell culture system with real-time microscopy access.

    PubMed

    Skafte-Pedersen, Peder; Hemmingsen, Mette; Sabourin, David; Blaga, Felician Stefan; Bruus, Henrik; Dufva, Martin

    2012-04-01

    Utilizing microfluidics is a promising way for increasing the throughput and automation of cell biology research. We present a complete self-contained system for automated cell culture and experiments with real-time optical read-out. The system offers a high degree of user-friendliness, stability due to simple construction principles and compactness for integration with standard instruments. Furthermore, the self-contained system is highly portable enabling transfer between work stations such as laminar flow benches, incubators and microscopes. Accommodation of 24 individual inlet channels enables the system to perform parallel, programmable and multiconditional assays on a single chip. A modular approach provides system versatility and allows many different chips to be used dependent upon application. We validate the system's performance by demonstrating on-chip passive switching and mixing by peristaltically driven flows. Applicability for biological assays is demonstrated by on-chip cell culture including on-chip transfection and temporally programmable gene expression.

  1. CS-Studio Scan System Parallelization

    SciTech Connect

    Kasemir, Kay; Pearson, Matthew R

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.
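
    For illustration only, a minimal sketch of simultaneous PV writes from Python, assuming the pyepics client library; the PV names are hypothetical placeholders, and this is not the Scan System's own implementation.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from epics import caput  # assumed client API: caput(pvname, value, wait=True)

    def set_parallel(pv_values, timeout=30.0):
        """Write several PVs concurrently and block until every write completes,
        mirroring a simultaneous-adjustment step in a scan."""
        with ThreadPoolExecutor(max_workers=max(1, len(pv_values))) as pool:
            futures = {pool.submit(caput, pv, val, wait=True, timeout=timeout): pv
                       for pv, val in pv_values.items()}
            return {futures[f]: f.result() for f in futures}

    # set_parallel({"Sample:Temp:SP": 150.0, "Motor3:Pos": 12.5})  # hypothetical PVs
    ```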

  2. Embodied and Distributed Parallel DJing.

    PubMed

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.

  3. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    The program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a ... Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986 and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  4. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  5. Parallels in History.

    ERIC Educational Resources Information Center

    Mugleston, William F.

    2000-01-01

    Believes that by focusing on the recurrent situations and problems, or parallels, throughout history, students will understand the relevance of history to their own times and lives. Provides suggestions for parallels in history that may be introduced within lectures or as a means to class discussions. (CMK)

  6. Automated DNA Sequencing System

    SciTech Connect

    Armstrong, G.A.; Ekkebus, C.P.; Hauser, L.J.; Kress, R.L.; Mural, R.J.

    1999-04-25

    Oak Ridge National Laboratory (ORNL) is developing a core DNA sequencing facility to support biological research endeavors at ORNL and to conduct basic sequencing automation research. This facility is novel because its development is based on existing standard biology laboratory equipment; thus, the development process is of interest to the many small laboratories trying to use automation to control costs and increase throughput. Before automation, biology laboratory personnel purified DNA, completed cycle sequencing, and prepared 96-well sample plates with commercially available hardware designed specifically for each step in the process. Following purification and thermal cycling, an automated sequencing machine was used for the sequencing. A technician handled all movement of the 96-well sample plates between machines. To automate the process, ORNL is adding a CRS Robotics A-465 arm, ABI 377 sequencing machine, automated centrifuge, automated refrigerator, and possibly an automated SpeedVac. The entire system will be integrated with one central controller that will direct each machine and the robot. The goal of this system is to completely automate the sequencing procedure from bacterial cell samples through ready-to-be-sequenced DNA and ultimately to completed sequence. The system will be flexible and will accommodate different chemistries than existing automated sequencing lines. The system will be expanded in the future to include colony picking and/or actual sequencing. This discrete-event DNA sequencing system will demonstrate that smaller sequencing labs can achieve cost-effective automation as the laboratory grows.

  7. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  8. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
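
    The flavor of a decomposed parallel sieve can be conveyed in a short Python sketch: compute the base primes serially up to sqrt(N), then let workers cross off composites in independent blocks. This block decomposition is for illustration only; it is not the authors' hypercube code or their scattered decomposition.

    ```python
    import math
    from multiprocessing import Pool

    def base_primes(limit):
        """Plain serial sieve; its output is shared with every worker."""
        is_prime = bytearray(b"\x01") * (limit + 1)
        is_prime[:2] = b"\x00\x00"
        for p in range(2, math.isqrt(limit) + 1):
            if is_prime[p]:
                is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
        return [i for i in range(2, limit + 1) if is_prime[i]]

    def sieve_block(args):
        """Cross off multiples of the base primes inside one independent block."""
        lo, hi, primes = args
        flags = bytearray(b"\x01") * (hi - lo)
        for p in primes:
            start = max(p * p, -(-lo // p) * p)  # first multiple of p >= lo
            for m in range(start, hi, p):
                flags[m - lo] = 0
        return [lo + i for i, f in enumerate(flags) if f]

    def parallel_sieve(n, workers=4):
        """Primes up to n; call from under `if __name__ == "__main__":`."""
        primes = base_primes(math.isqrt(n))
        step = (n - 1) // workers + 1
        blocks = [(lo, min(lo + step, n + 1), primes) for lo in range(2, n + 1, step)]
        with Pool(workers) as pool:
            return [p for blk in pool.map(sieve_block, blocks) for p in blk]
    ```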

  9. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  10. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  11. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.

  12. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  13. Automated Fresnel lens tester system

    SciTech Connect

    Phipps, G.S.

    1981-07-01

    An automated data collection system controlled by a desktop computer has been developed for testing Fresnel concentrators (lenses) intended for solar energy applications. The system maps the two-dimensional irradiance pattern (image) formed in a plane parallel to the lens, while the lens and detector assembly track the sun. A point-detector silicon diode (0.5-mm-dia active area) measures the irradiance at each point of an operator-defined rectilinear grid of data positions. Comparison with a second detector measuring solar insolation levels results in solar concentration ratios over the image plane. Summation of image plane energies allows calculation of lens efficiencies for various solar cell sizes. Various graphical plots of concentration ratio data help to visualize energy distribution patterns.
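
    The two derived quantities mentioned, pointwise concentration ratio and lens efficiency, reduce to a division and a discrete sum over the measured grid. A small numpy sketch under those assumptions, with all names ours:

    ```python
    import numpy as np

    def concentration_ratios(image_irradiance, insolation):
        """Pointwise solar concentration ratio over the scanned image plane."""
        return np.asarray(image_irradiance) / insolation

    def lens_efficiency(image_irradiance, point_area, insolation, lens_area,
                        cell_mask=None):
        """Power landing on the (optionally masked) cell region divided by
        the power entering the lens aperture."""
        img = np.asarray(image_irradiance, dtype=float)
        if cell_mask is not None:
            img = np.where(cell_mask, img, 0.0)  # keep only points on the cell
        return img.sum() * point_area / (insolation * lens_area)
    ```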

  14. Management Planning for Workplace Automation.

    ERIC Educational Resources Information Center

    McDole, Thomas L.

    Several factors must be considered when implementing office automation. Included among these are whether or not to automate at all, the effects of automation on employees, requirements imposed by automation on the physical environment, effects of automation on the total organization, and effects on clientele. The reasons behind the success or…

  15. Laboratory Automation and Middleware.

    PubMed

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory.

  16. Complacency and Automation Bias in the Use of Imperfect Automation.

    PubMed

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  17. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  18. Automation in the clinical microbiology laboratory.

    PubMed

    Novak, Susan M; Marlowe, Elizabeth M

    2013-09-01

    Imagine a clinical microbiology laboratory where a patient's specimens are placed on a conveyor belt and sent on an automation line for processing and plating. Technologists need only log onto a computer to visualize the images of a culture and send to a mass spectrometer for identification. Once a pathogen is identified, the system knows to send the colony for susceptibility testing. This is the future of the clinical microbiology laboratory. This article outlines the operational and staffing challenges facing clinical microbiology laboratories and the evolution of automation that is shaping the way laboratory medicine will be practiced in the future.

  19. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  20. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  1. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  2. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-09-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

  3. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  4. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  5. Revisiting and parallelizing SHAKE

    NASA Astrophysics Data System (ADS)

    Weinbach, Yael; Elber, Ron

    2005-10-01

    An algorithm is presented for running SHAKE in parallel. SHAKE is a widely used approach to compute molecular dynamics trajectories with constraints. An essential step in SHAKE is the solution of a sparse linear problem of the type Ax = b, where x is a vector of unknowns. Conjugate gradient minimization (that can be done in parallel) replaces the widely used iteration process that is inherently serial. Numerical examples present good load balancing and are limited only by communication time.
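
    The appeal of conjugate gradient here is that its building blocks (matrix-vector products and dot products) are data-parallel, unlike the inherently serial iteration it replaces. For reference, a standard textbook CG sketch in numpy; a production version would operate on the sparse constraint matrix with parallel primitives.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
        """Solve A x = b for symmetric positive-definite A (textbook CG)."""
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x            # residual
        p = r.copy()             # search direction
        rs = r @ r
        for _ in range(max_iter or b.size):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x
    ```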

  6. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
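
    As a reference point for the vector-quantization ingredient, here is a toy codebook construction by k-means over flattened image blocks; it illustrates VQ in general and is not the MPP algorithm of the paper.

    ```python
    import numpy as np

    def vq_codebook(blocks, k, n_iter=10, seed=None):
        """blocks: (n, d) array of flattened image blocks. Returns a (k, d)
        codebook and each block's nearest-codeword index (the encoding)."""
        rng = np.random.default_rng(seed)
        codebook = blocks[rng.choice(len(blocks), k, replace=False)].astype(float)
        for _ in range(n_iter):
            # assign every block to its nearest codeword (Euclidean distance)
            d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
            assign = d.argmin(axis=1)
            for j in range(k):  # move each codeword to the mean of its cell
                members = blocks[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
        return codebook, assign
    ```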

  7. Automated manufacturing of chimeric antigen receptor T cells for adoptive immunotherapy using CliniMACS prodigy.

    PubMed

    Mock, Ulrike; Nickolay, Lauren; Philip, Brian; Cheung, Gordon Weng-Kit; Zhan, Hong; Johnston, Ian C D; Kaiser, Andrew D; Peggs, Karl; Pule, Martin; Thrasher, Adrian J; Qasim, Waseem

    2016-08-01

    Novel cell therapies derived from human T lymphocytes are exhibiting enormous potential in early-phase clinical trials in patients with hematologic malignancies. Ex vivo modification of T cells is currently limited to a small number of centers with the required infrastructure and expertise. The process requires isolation, activation, transduction, expansion and cryopreservation steps. To simplify procedures and widen applicability for clinical therapies, automation of these procedures is being developed. The CliniMACS Prodigy (Miltenyi Biotec) has recently been adapted for lentiviral transduction of T cells and here we analyse the feasibility of a clinically compliant T-cell engineering process for the manufacture of T cells encoding chimeric antigen receptors (CAR) for CD19 (CAR19), a widely targeted antigen in B-cell malignancies. Using a closed, single-use tubing set we processed mononuclear cells from fresh or frozen leukapheresis harvests collected from healthy volunteer donors. Cells were phenotyped and subjected to automated processing and activation using TransAct, a polymeric nanomatrix activation reagent incorporating CD3/CD28-specific antibodies. Cells were then transduced and expanded in the CentriCult-Unit of the tubing set, under stabilized culture conditions with automated feeding and media exchange. The process was continuously monitored to determine kinetics of expansion, transduction efficiency and phenotype of the engineered cells in comparison with small-scale transductions run in parallel. We found that transduction efficiencies, phenotype and function of CAR19 T cells were comparable with existing procedures and overall T-cell yields sufficient for anticipated therapeutic dosing. The automation of closed-system T-cell engineering should improve dissemination of emerging immunotherapies and greatly widen applicability. Copyright © 2016. Published by Elsevier Inc.

  8. Inventory management and reagent supply for automated chemistry.

    PubMed

    Kuzniar, E

    1999-08-01

    Developments in automated chemistry have kept pace with developments in HTS such that hundreds of thousands of new compounds can be rapidly synthesized in the belief that the greater the number and diversity of compounds that can be screened, the more successful HTS will be. The increasing use of automation for Multiple Parallel Synthesis (MPS) and the move to automated combinatorial library production is placing an overwhelming burden on the management of reagents. Although automation has improved the efficiency of the processes involved in compound synthesis, the bottleneck has shifted to ordering, collating and preparing reagents for automated chemistry resulting in loss of time, materials and momentum. Major efficiencies have already been made in the area of compound management for high throughput screening. Most of these efficiencies have been achieved with sophisticated library management systems using advanced engineering and data handling for the storage, tracking and retrieval of millions of compounds. The Automation Partnership has already provided many of the top pharmaceutical companies with modular automated storage, preparation and retrieval systems to manage compound libraries for high throughput screening. This article describes how these systems may be implemented to solve the specific problems of inventory management and reagent supply for automated chemistry.

  9. Automating checks of plan check automation.

    PubMed

    Halabi, Tarek; Lu, Hsiao-Ming

    2014-07-08

    While a few physicists have designed new plan check automation solutions for their clinics, fewer, if any, managed to adapt existing solutions. As complex and varied as the systems they check, these programs must gain the full confidence of those who would run them on countless patient plans. The present automation effort, planCheck, therefore focuses on versatility and ease of implementation and verification. To demonstrate this, we apply planCheck to proton gantry, stereotactic proton gantry, stereotactic proton fixed beam (STAR), and IMRT treatments.

  10. Automation and Cataloging.

    ERIC Educational Resources Information Center

    Furuta, Kenneth; And Others

    1990-01-01

    These three articles address issues in library cataloging that are affected by automation: (1) the impact of automation and bibliographic utilities on professional catalogers; (2) the effect of the LASS microcomputer software on the cost of authority work in cataloging at the University of Arizona; and (3) online subject heading and classification…

  11. Order Division Automated System.

    ERIC Educational Resources Information Center

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  12. More Benefits of Automation.

    ERIC Educational Resources Information Center

    Getz, Malcolm

    1988-01-01

    Describes a study that measured the benefits of an automated catalog and automated circulation system from the library user's point of view in terms of the value of time saved. Topics discussed include patterns of use, access time, availability of information, search behaviors, and the effectiveness of the measures used. (seven references)…

  13. Work and Programmable Automation.

    ERIC Educational Resources Information Center

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  14. The Automated Office.

    ERIC Educational Resources Information Center

    Naclerio, Nick

    1979-01-01

    Clerical personnel may be able to climb career ladders as a result of office automation and expanded job opportunities in the word processing area. Suggests opportunities in an automated office system and lists books and periodicals on word processing for counselors and teachers. (MF)

  15. Planning for Office Automation.

    ERIC Educational Resources Information Center

    Sherron, Gene T.

    1982-01-01

    The steps taken toward office automation by the University of Maryland are described. Office automation is defined and some types of word processing systems are described. Policies developed in the writing of a campus plan are listed, followed by a section on procedures adopted to implement the plan. (Author/MLW)

  16. WANTED: Fully Automated Indexing.

    ERIC Educational Resources Information Center

    Purcell, Royal

    1991-01-01

    Discussion of indexing focuses on the possibilities of fully automated indexing. Topics discussed include controlled indexing languages such as subject heading lists and thesauri, free indexing languages, natural indexing languages, computer-aided indexing, expert systems, and the need for greater creativity to further advance automated indexing.…

  17. Advances in inspection automation

    NASA Astrophysics Data System (ADS)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone or impossible for humans to perform can now be automated using a form of drag-and-drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  18. Automation in Immunohematology

    PubMed Central

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-01-01

    There have been rapid technological advances in blood banking in South Asian region over the past decade with an increasing emphasis on quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation and major manufacturers in this field have come up with semi and fully automated equipments for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes and archiving of results is another major advantage of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with lesser turnaround time for their ever increasing workload. This article discusses the various issues involved in the process. PMID:22988378

  19. Expansion of activated lymphocytes obtained from renal cell carcinoma in an automated hollow fiber bioreactor.

    PubMed

    Hillman, G G; Wolf, M L; Montecillo, E; Younes, E; Ali, E; Pontes, J E; Haas, G P

    1994-01-01

    Immunotherapy using IL-2 alone or combined with activated lymphocytes has been promising for metastatic renal cell carcinoma. Cytotoxic lymphocytes can be isolated from tumors, expanded in vitro with IL-2, and adoptively transferred back into the tumor-bearing host. These cells can also be transduced with the genes coding for cytokines for local delivery to tumor sites. A major drawback in adoptive immunotherapy is the cumbersome and expensive culture technology associated with the growth of large numbers of cells required for their therapeutic effect. To reduce the cost, resources, and manpower, we have developed the methodology for lymphocyte activation and expansion in the automated hollow fiber bioreactor IMMUNO*STAR Cell Expander (ACT BIOMEDICAL, INC). Tumor Infiltrating Lymphocytes (TIL) isolated from human renal cell carcinoma tumor specimens were inoculated at a number of 10(8) cells in a small bioreactor of 30 ml extracapillary space volume. We have determined the medium flow rates and culture conditions to obtain a significant and repeated expansion of TIL at weekly intervals. The lymphocytes cultured in the bioreactor demonstrated the same phenotype and cytotoxic activity as those expanded in parallel in tissue culture plates. Lymphocyte expansion in the hollow fiber bioreactor required lower volumes of medium, human serum, IL-2 and minimal labor. This technology may facilitate the use of adoptive immunotherapy for the treatment of refractory malignancies.

  20. Automation of Hubble Space Telescope Mission Operations

    NASA Technical Reports Server (NTRS)

    Burley, Richard; Goulet, Gregory; Slater, Mark; Huey, William; Bassford, Lynn; Dunham, Larry

    2012-01-01

    On June 13, 2011, after more than 21 years, 115 thousand orbits, and nearly 1 million exposures taken, the operation of the Hubble Space Telescope successfully transitioned from 24x7x365 staffing to 8x5 staffing. This required the automation of routine mission operations including telemetry and forward link acquisition, data dumping and solid-state recorder management, stored command loading, and health and safety monitoring of both the observatory and the HST Ground System. These changes were driven by budget reductions, and required ground system and onboard spacecraft enhancements across the entire operations spectrum, from planning and scheduling systems to payload flight software. Changes in personnel and staffing were required in order to adapt to the new roles and responsibilities required in the new automated operations era. This paper will provide a high level overview of the obstacles to automating nominal HST mission operations, both technical and cultural, and how those obstacles were overcome.

  1. Sublattice parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  2. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
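
    To convey the idea of parallelism in time, here is a sketch of parareal, the simplest two-level member of the multigrid-reduction-in-time family that MGRIT generalizes to true multilevel; the propagator arguments and names are ours, not this package's API.

    ```python
    import numpy as np

    def parareal(f_fine, g_coarse, u0, t, n_iter=5):
        """f_fine/g_coarse propagate a state from t[i] to t[i+1]; within each
        iteration all f_fine calls are independent and would run concurrently."""
        n = len(t) - 1
        u = [u0]
        for i in range(n):  # serial coarse predictor sweep
            u.append(g_coarse(u[i], t[i], t[i + 1]))
        for _ in range(n_iter):
            fine = [f_fine(u[i], t[i], t[i + 1]) for i in range(n)]  # parallel part
            u_new = [u0]
            for i in range(n):  # cheap serial coarse correction sweep
                u_new.append(g_coarse(u_new[i], t[i], t[i + 1])
                             + fine[i] - g_coarse(u[i], t[i], t[i + 1]))
            u = u_new
        return u

    # Toy check on du/dt = -u: fine = 100 Euler substeps, coarse = 1 step.
    fine = lambda u, a, b: u * (1.0 - (b - a) / 100.0) ** 100
    coarse = lambda u, a, b: u * (1.0 - (b - a))
    print(parareal(fine, coarse, 1.0, np.linspace(0.0, 1.0, 11))[-1])  # ~ exp(-1)
    ```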

  3. Parallel architectures for vision

    SciTech Connect

    Maresca, M. ); Lavin, M.A. ); Li, H. )

    1988-08-01

    Vision computing involves the execution of a large number of operations on large sets of structured data. Sequential computers cannot achieve the speed required by most of the current applications and therefore parallel architectural solutions have to be explored. In this paper the authors examine the options that drive the design of a vision oriented computer, starting with the analysis of the basic vision computation and communication requirements. They briefly review the classical taxonomy for parallel computers, based on the multiplicity of the instruction and data stream, and apply a recently proposed criterion, the degree of autonomy of each processor, to further classify fine-grain SIMD massively parallel computers. They identify three types of processor autonomy, namely operation autonomy, addressing autonomy, and connection autonomy. For each type they give the basic definitions and show some examples. They focus on the concept of connection autonomy, which they believe is a key point in the development of massively parallel architectures for vision. They show two examples of parallel computers featuring different types of connection autonomy - the Connection Machine and the Polymorphic-Torus - and compare their cost and benefit.

  4. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  5. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  6. Collisionless parallel shocks

    SciTech Connect

    Khabibrakhmanov, I.K. ); Galeev, A.A.; Galinsky, V.L. )

    1993-02-01

    A collisionless parallel shock model is presented which is based on solitary-type solutions of the modified derivative nonlinear Schrodinger equation (MDNLS) for parallel Alfven waves. We generalize the standard derivative nonlinear Schrodinger equation in order to include the possible anisotropy of the plasma distribution function and higher-order Korteweg-de Vries type dispersion. Stationary solutions of MDNLS are discussed. The new mechanism of ion reflection from the magnetic mirror of the parallel shock structure, which can be called "adiabatic", is a natural and essential feature of the parallel shock that introduces irreversible properties into the nonlinear wave structure and may significantly contribute to the plasma heating upstream as well as downstream of the shock. The anisotropic nature of "adiabatic" reflections leads to an asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, a nonzero heat flux appears near the front of the shock. It is shown that this causes stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization. The number of adiabatically reflected ions defines the threshold conditions of the fire-hose and mirror-type instabilities in the downstream and upstream regions and thus determines a parameter region in which the described laminar parallel shock structure can exist. 29 refs., 4 figs.

  7. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present development status evaluation shows that neither architecture has attained a decisive advantage in the treatment of most near-homogeneous problems; in the case of problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  8. Ion parallel closures

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Lee, Hankyu Q.; Held, Eric D.

    2017-02-01

    Ion parallel closures are obtained for arbitrary atomic weights and charge numbers. For arbitrary collisionality, the heat flow and viscosity are expressed as kernel-weighted integrals of the temperature and flow-velocity gradients. Simple, fitted kernel functions are obtained from the 1600 parallel moment solution and the asymptotic behavior in the collisionless limit. The fitted kernel parameters are tabulated for various temperature ratios of ions to electrons. The closures can be used conveniently without solving the kinetic equation or higher order moment equations in closing ion fluid equations.
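
    Schematically, closures of this kind are nonlocal convolutions along the field line; the display below (in LaTeX, with notation assumed for illustration: v_T a thermal speed, lambda a collision length) indicates the structure, not the paper's fitted kernels.

      % Kernel-weighted closure integrals, schematic form only
      \[
        q_\parallel(\ell) \sim -\, n\, v_T \int K_q\!\left(\tfrac{\ell-\ell'}{\lambda}\right)
          \frac{\partial T(\ell')}{\partial \ell'}\, d\ell' , \qquad
        \pi_\parallel(\ell) \sim -\, n\, m\, v_T \int K_\pi\!\left(\tfrac{\ell-\ell'}{\lambda}\right)
          \frac{\partial u_\parallel(\ell')}{\partial \ell'}\, d\ell' .
      \]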

  9. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer, the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. With a particular language, it is also important whether the capabilities of one or more parallel architectures can be addressed efficiently by the available language constructs. In this paper, the possibilities of the high-level language Ada, and in particular of its tasking concept, are discussed as a descriptive tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  10. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  11. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
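
    The folklore and its challenge are easy to quantify. This small Python sketch contrasts Amdahl's fixed-size speedup bound with the scaled (Gustafson-style) speedup reflected in the Sandia measurements; the serial fraction s is an assumed parameter.

      def amdahl(s, p):
          """Fixed-size speedup with serial fraction s on p processors."""
          return 1.0 / (s + (1.0 - s) / p)

      def gustafson(s, p):
          """Scaled speedup: the problem grows with p, the serial part stays fixed."""
          return p - s * (p - 1)

      p = 1024
      for s in (0.001, 0.01, 0.1):
          print(f"s={s:5.3f}  fixed-size: {amdahl(s, p):7.1f}   scaled: {gustafson(s, p):7.1f}")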

  12. CRUNCH_PARALLEL

    SciTech Connect

    Shumaker, Dana E.; Steefel, Carl I.

    2016-06-21

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general-purpose reactive transport code developed by Carl Steefel and Yabusake (Steefel and Yabusake, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.

  13. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  14. A centralized global automation group in a decentralized organization.

    PubMed

    Ormand, J; Bruner, J; Birkemo, L; Hinderliter-Smith, J; Veitch, J

    2000-01-01

    In the latter part of the 1990s, many companies have worked to foster a 'matrix' style culture through several changes in organizational structure. This type of culture facilitates communication and development of new technology across organizational and global boundaries. At Glaxo Wellcome, this matrix culture is reflected in an automation strategy that relies on both centralized and decentralized resources. The Group Development Operations Information Systems Robotics Team is a centralized resource providing development, support, integration, and training in laboratory automation across businesses in the Development organization. The matrix culture still presents challenges with respect to communication and managing the development of technology. A current challenge for our team is to go beyond our recognized role as a technology resource and actually to influence automation strategies across the global Development organization. We shall provide an overview of our role as a centralized resource, our team strategy, examples of current and past successes and failures, and future directions.

  15. A centralized global automation group in a decentralized organization

    PubMed Central

    Ormand, James; Bruner, Jimmy; Birkemo, Larry; Hinderliter-Smith, Judy; Veitch, Jeffrey

    2000-01-01

    In the latter part of the 1990s, many companies have worked to foster a ‘matrix’ style culture through several changes in organizational structure. This type of culture facilitates communication and development of new technology across organizational and global boundaries. At Glaxo Wellcome, this matrix culture is reflected in an automation strategy that relies on both centralized and decentralized resources. The Group Development Operations Information Systems Robotics Team is a centralized resource providing development, support, integration, and training in laboratory automation across businesses in the Development organization. The matrix culture still presents challenges with respect to communication and managing the development of technology. A current challenge for our team is to go beyond our recognized role as a technology resource and actually to influence automation strategies across the global Development organization. We shall provide an overview of our role as a centralized resource, our team strategy, examples of current and past successes and failures, and future directions. PMID:18924694

  16. Automation synthesis modules review.

    PubMed

    Boschi, S; Lodi, F; Malizia, C; Cicoria, G; Marengo, M

    2013-06-01

    The introduction of (68)Ga labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived (68)Ge/(68)Ga generator has been at the basis of the development of (68)Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues and the careful radioprotection of the operators have pushed for extensive automation of the production process. The development of automated systems for (68)Ga radiochemistry, different engineering and software strategies and post-processing of the eluate are discussed, along with the impact of regulations on automation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Managing laboratory automation

    PubMed Central

    Saboe, Thomas J.

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A big-picture, or continuum, view is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are discussed.

  18. Automated Hydroforming of Seamless Superconducting RF Cavity

    SciTech Connect

    Nagata, Tomohiko; Shinozawa, Seiichi; Abe, Noriyuki; Nagakubo, Junki; Murakami, Hirohiko; Tajima, Tsuyoshi; Inoue, Hitoshi; Yamanaka, Masashi; Ueno, Kenji

    2012-07-31

    We are studying the possibility of an automated hydroforming process for seamless superconducting RF cavities. Preliminary hydroforming tests of three-cell cavities from seamless tubes made of C1020 copper have been performed. The key point of automated forming is to monitor and strictly control parameters such as operation time, internal pressure and material displacements. In particular, it is necessary for our studies to be able to control axial and radial deformation independently. We plan to perform the forming in two stages to increase the reliability of successful forming. In the first-stage hydroforming, using intermediate constraint dies, three-cell cavities were successfully formed in less than 1 minute. In parallel, we did elongation tests on cavity-quality niobium and confirmed that it is possible to achieve the elongation of >64% in 2 stages that is required for our forming of 1.3 GHz cavities.

  19. Automated Generation of Message-Passing Programs: An Evaluation of CAPTools using NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Jin, Hao-Qiang; Yan, Jerry C.; Bailey, David (Technical Monitor)

    1998-01-01

    Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During the same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort in migrating and recoding our applications. As applications and machine architectures continue to become increasingly complex, the cost and time required for this process will become prohibitive. Various attempts to exploit software tools to assist and automate the parallelization process have not produced favorable results. In this paper, we evaluate an interactive parallelization tool, CAPTools, for parallelizing serial versions of the NAS Parallel Benchmarks. We then compare the performance of the resulting CAPTools-generated code to the hand-coded benchmarks on the Origin 2000 and IBM SP2. Based on these results, a discussion on the feasibility of automated parallelization of aerospace applications is presented along with suggestions for future work.

  20. A nanoliter-scale nucleic acid processor with parallel architecture.

    PubMed

    Hong, Jong Wook; Studer, Vincent; Hang, Giao; Anderson, W French; Quake, Stephen R

    2004-04-01

    The purification of nucleic acids from microbial and mammalian cells is a crucial step in many biological and medical applications. We have developed microfluidic chips for automated nucleic acid purification from small numbers of bacterial or mammalian cells. All processes, such as cell isolation, cell lysis, DNA or mRNA purification, and recovery, were carried out on a single microfluidic chip in nanoliter volumes without any pre- or postsample treatment. Measurable amounts of mRNA were extracted in an automated fashion from as little as a single mammalian cell and recovered from the chip. These microfluidic chips are capable of processing different samples in parallel, thereby illustrating how highly parallel microfluidic architectures can be constructed to perform integrated batch-processing functionalities for biological and medical applications.

  1. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  2. Parallel Total Energy

    SciTech Connect

    Wang, Lin-Wang

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  3. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, Michael

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  4. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  5. [The parallel saw blade].

    PubMed

    Mühldorfer-Fodor, M; Hohendorff, B; Prommersberger, K-J; van Schoonhoven, J

    2011-04-01

    For a shortening osteotomy, two exactly parallel osteotomies are needed to assure a congruent adaptation of the shortened bone after segment resection. This is required for regular bone healing. In addition, it is difficult to shorten a bone by a precise distance using an oblique segment resection. A mobile spacer between two saw blades keeps the blades exactly parallel at a fixed distance during an osteotomy cut. The parallel saw blades from Synthes® are designed for 2, 2.5, 3, 4, and 5 mm shortening distances. Two types of blades are available (e.g., for transverse or oblique osteotomies) to assure precise shortening. Preoperatively, the desired type of osteotomy (transverse or oblique) and the shortening distance have to be determined. Then the appropriate parallel saw blade is chosen, which is compatible with the Synthes® Colibri with an oscillating saw attachment. During the osteotomy cut, the spacer should be kept as close to the bone as possible. Excessive force that may deform the blades should be avoided. Before manipulating the bone ends, it is important to confirm that the bone is completely dissected by both saw blades, to prevent fracturing of the corticalis with bony spurs. The shortening osteotomy is mainly fixated by plate osteosynthesis. For compression of the bone ends, the screws should be placed eccentrically in the plate holes. For an oblique osteotomy, an additional lag screw should be used.

  6. Parallel Coordinate Axes.

    ERIC Educational Resources Information Center

    Friedlander, Alex; And Others

    1982-01-01

    Several methods of numerical mappings other than the usual cartesian coordinate system are considered. Some examples using parallel axes representation, which are seen to lead to aesthetically pleasing or interesting configurations, are presented. Exercises with alternative representations can stimulate pupil imagination and exploration in…

  7. Parallel Dislocation Simulator

    SciTech Connect

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  8. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
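
    For orientation, the direct O(N^2) sum that the fast transform accelerates takes only a few lines of numpy; this baseline sketch is for comparison only and implements none of the plane-wave or octree machinery.

      import numpy as np

      def direct_gauss_transform(sources, targets, weights, delta):
          """G(y_j) = sum_i w_i exp(-|y_j - x_i|^2 / delta), computed directly."""
          d2 = np.sum((targets[:, None, :] - sources[None, :, :]) ** 2, axis=-1)
          return np.exp(-d2 / delta) @ weights

      rng = np.random.default_rng(0)
      x, y, w = rng.random((500, 3)), rng.random((400, 3)), rng.random(500)
      print(direct_gauss_transform(x, y, w, delta=0.1).shape)  # (400,)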

  9. Progress in parallelizing XOOPIC

    NASA Astrophysics Data System (ADS)

    Mardahl, Peter; Verboncoeur, J. P.

    1997-11-01

    XOOPIC (Object-Oriented Particle-in-Cell code for X11-based Unix workstations) is presently a serial 2-D 3v particle-in-cell plasma simulation (J.P. Verboncoeur, A.B. Langdon, and N.T. Gladd, "An object-oriented electromagnetic PIC code," Computer Physics Communications 87 (1995) 199-211). The present effort focuses on using parallel and distributed processing to optimize the simulation for large problems. The benefits include increased capacity for memory-intensive problems and improved performance for processor-intensive problems. The MPI library is used to enable the parallel version to be easily ported to massively parallel, SMP, and distributed computers. The philosophy employed here is to spatially decompose the system into computational regions separated by 'virtual boundaries': objects which contain the local data and algorithms to perform the local field solve and particle communication between regions. This encapsulation reduces the changes that parallelization requires in the rest of the program. Specific implementation details, such as the hiding of communication latency behind local computation, will also be discussed.
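
    A minimal sketch of the 'virtual boundary' pattern is shown below using mpi4py; the field layout, one-row guard regions and names are assumptions for illustration and do not reproduce XOOPIC's classes.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      nx_local, ny = 64, 128
      field = np.zeros((nx_local + 2, ny))   # two extra guard rows
      field[1:-1, :] = rank                  # placeholder interior data

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Exchange guard rows with both neighbours each step; this communication
      # can later be overlapped with interior computation to hide latency.
      comm.Sendrecv(np.ascontiguousarray(field[-2, :]), dest=right, sendtag=0,
                    recvbuf=field[0, :], source=left, recvtag=0)
      comm.Sendrecv(np.ascontiguousarray(field[1, :]), dest=left, sendtag=1,
                    recvbuf=field[-1, :], source=right, recvtag=1)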

  10. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Quinn O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  11. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  12. Parallel Multigrid Equation Solver

    SciTech Connect

    Adams, Mark

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  13. Xenon International Automated Control

    SciTech Connect

    2016-08-05

    The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.

  14. Automating the Media Center.

    ERIC Educational Resources Information Center

    Holloway, Mary A.

    1988-01-01

    Discusses the need to develop more efficient information retrieval skills by the use of new technology. Lists four stages used in automating the media center. Describes North Carolina's pilot programs. Proposes benefits and looks at the media center's future. (MVL)

  15. Planning for Office Automation.

    ERIC Educational Resources Information Center

    Mick, Colin K.

    1983-01-01

    Outlines a practical approach to planning for office automation termed the "Focused Process Approach" (the "what" phase, "how" phase, "doing" phase) which is a synthesis of the problem-solving and participatory planning approaches. Thirteen references are provided. (EJS)

  16. Automated decision stations

    NASA Technical Reports Server (NTRS)

    Tischendorf, Mark

    1990-01-01

    This paper discusses the combination of software robots and expert systems to automate everyday business tasks, i.e., tasks which require people to interact repetitively with multiple system screens as well as with multiple systems.

  17. Automated Status Notification System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA Lewis Research Center's Automated Status Notification System (ASNS) was born out of need. To prevent "hacker attacks," Lewis' telephone system needed to monitor communications activities 24 hr a day, 7 days a week. With decreasing staff resources, this continuous monitoring had to be automated. By utilizing existing communications hardware, a UNIX workstation, and NAWK (a pattern scanning and processing language), we implemented a continuous monitoring system.

  18. Automated Microfluidics for Genomics

    DTIC Science & Technology

    2001-10-25

    Abstract--The Genomation Laboratory at the University of Washington is developing an automated fluid handling system called "Acapella" to prepare... Photonic Systems, Inc. (Redmond, WA), an automated submicroliter fluid sample preparation system called ACAPELLA is being developed. Reactions such... technology include minimal residual disease quantification and sample preparation for DNA. Preliminary work on the ACAPELLA is presented in [4][5].

  19. Automated Pilot Advisory System

    NASA Technical Reports Server (NTRS)

    Parks, J. L., Jr.; Haidt, J. G.

    1981-01-01

    An Automated Pilot Advisory System (APAS) was developed and operationally tested to demonstrate the concept that low-cost automated systems can provide air traffic and aviation weather advisory information at high-density uncontrolled airports. The system was designed to enhance the "see and be seen" rule of flight, and pilots who used the system preferred it over the self-announcement system presently used at uncontrolled airports.

  20. Automating Index Preparation

    DTIC Science & Technology

    1987-03-23

    Automating Index Preparation.* Pehong Chen and Michael A. Harrison, Computer Science Division, University of California, Berkeley, CA 94720. March 23, 1987. Abstract: Index preparation is a tedious and time-consuming task. In this paper we indicate how the indexing process can be automated in a way which... identified and analyzed. Specifically, we describe a framework for placing index commands in the document and a general purpose index processor which

  1. Automated Lattice Perturbation Theory

    SciTech Connect

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  2. Automated macromolecular crystal detection system and method

    DOEpatents

    Christian, Allen T.; Segelke, Brent; Rupp, Bernard; Toppani, Dominique

    2007-06-05

    An automated method and system for detecting macromolecular crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected in the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are subsequently evaluated geometrically with respect to each other to identify crystal-like qualities such as parallel lines facing each other, similarity in length, and relative proximity. From this evaluation a determination is made as to whether crystals are present in each image.
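
    The geometric evaluation can be illustrated with a short Python sketch (not the patented method): given two detected line segments, test near-parallelism, similarity in length, and relative proximity; the tolerances are assumed values.

      import numpy as np

      def segment_features(seg):
          (x1, y1), (x2, y2) = seg
          v = np.array([x2 - x1, y2 - y1], float)
          length = np.hypot(v[0], v[1])
          angle = np.arctan2(v[1], v[0]) % np.pi          # direction mod 180 degrees
          mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
          return length, angle, mid

      def crystal_like_pair(a, b, ang_tol=0.1, len_ratio=0.7, max_gap=20.0):
          la, aa, ma = segment_features(a)
          lb, ab, mb = segment_features(b)
          d_ang = min(abs(aa - ab), np.pi - abs(aa - ab))   # parallelism test
          similar = min(la, lb) / max(la, lb) >= len_ratio  # similar length
          near = np.linalg.norm(ma - mb) <= max_gap         # relative proximity
          return d_ang <= ang_tol and similar and near

      print(crystal_like_pair(((0, 0), (10, 0)), ((0, 5), (9, 5.3))))  # True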

  3. Automated inspection of hot steel slabs

    DOEpatents

    Martin, R.J.

    1985-12-24

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.

  4. Automated inspection of hot steel slabs

    DOEpatents

    Martin, Ronald J.

    1985-01-01

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes.

  5. Metrology automation reliability

    NASA Astrophysics Data System (ADS)

    Chain, Elizabeth E.

    1996-09-01

    At Motorola's MOS-12 facility, automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data is uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme, from job file generation to measurement and data collection. Data analysis and comparison to part specification limits is also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory, reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle and film thickness measurements.

  6. Automated Groundwater Screening

    SciTech Connect

    Taylor, Glenn A.; Collard, Leonard, B.

    2005-10-31

    The Automated Intruder Analysis has been extended to include an Automated Ground Water Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides which have significant daughters. The extension has the promise of reducing the number of radionuclides which must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing rigorous calculation of the screening factor thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application.

  7. Elements of EAF automation processes

    NASA Astrophysics Data System (ADS)

    Ioana, A.; Constantin, N.; Dragna, E. C.

    2017-01-01

    Our article presents elements of Electric Arc Furnace (EAF) automation. We present and analyze in detail two automation schemes: the scheme of the electrical EAF automation system and the scheme of the thermal EAF automation system. Applying these automation schemes results in a significant reduction in the specific consumption of electrical energy of the Electric Arc Furnace, increased productivity of the Electric Arc Furnace, improved quality of the produced steel, and increased durability of the structural elements of the Electric Arc Furnace.

  8. Yeast-based automated high-throughput screens to identify anti-parasitic lead compounds

    PubMed Central

    Bilsland, Elizabeth; Sparkes, Andrew; Williams, Kevin; Moss, Harry J.; de Clare, Michaela; Pir, Pınar; Rowland, Jem; Aubrey, Wayne; Pateman, Ron; Young, Mike; Carrington, Mark; King, Ross D.; Oliver, Stephen G.

    2013-01-01

    We have developed a robust, fully automated anti-parasitic drug-screening method that selects compounds specifically targeting parasite enzymes and not their host counterparts, thus allowing the early elimination of compounds with potential side effects. Our yeast system permits multiple parasite targets to be assayed in parallel owing to the strains’ expression of different fluorescent proteins. A strain expressing the human target is included in the multiplexed screen to exclude compounds that do not discriminate between host and parasite enzymes. This form of assay has the advantages of using known targets and not requiring the in vitro culture of parasites. We performed automated screens for inhibitors of parasite dihydrofolate reductases, N-myristoyltransferases and phosphoglycerate kinases, finding specific inhibitors of parasite targets. We found that our ‘hits’ have significant structural similarities to compounds with in vitro anti-parasitic activity, validating our screens and suggesting targets for hits identified in parasite-based assays. Finally, we demonstrate a 60 per cent success rate for our hit compounds in killing or severely inhibiting the growth of Trypanosoma brucei, the causative agent of African sleeping sickness. PMID:23446112

  9. Yeast-based automated high-throughput screens to identify anti-parasitic lead compounds.

    PubMed

    Bilsland, Elizabeth; Sparkes, Andrew; Williams, Kevin; Moss, Harry J; de Clare, Michaela; Pir, Pinar; Rowland, Jem; Aubrey, Wayne; Pateman, Ron; Young, Mike; Carrington, Mark; King, Ross D; Oliver, Stephen G

    2013-02-27

    We have developed a robust, fully automated anti-parasitic drug-screening method that selects compounds specifically targeting parasite enzymes and not their host counterparts, thus allowing the early elimination of compounds with potential side effects. Our yeast system permits multiple parasite targets to be assayed in parallel owing to the strains' expression of different fluorescent proteins. A strain expressing the human target is included in the multiplexed screen to exclude compounds that do not discriminate between host and parasite enzymes. This form of assay has the advantages of using known targets and not requiring the in vitro culture of parasites. We performed automated screens for inhibitors of parasite dihydrofolate reductases, N-myristoyltransferases and phosphoglycerate kinases, finding specific inhibitors of parasite targets. We found that our 'hits' have significant structural similarities to compounds with in vitro anti-parasitic activity, validating our screens and suggesting targets for hits identified in parasite-based assays. Finally, we demonstrate a 60 per cent success rate for our hit compounds in killing or severely inhibiting the growth of Trypanosoma brucei, the causative agent of African sleeping sickness.

  10. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background: Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods: Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results: We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion: The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  11. Parallel processing of remotely sensed data: Application to the ATSR-2 instrument

    NASA Astrophysics Data System (ADS)

    Simpson, J.; McIntire, T.; Berg, J.; Tsou, Y.

    2007-01-01

    Massively parallel computational paradigms can mitigate many issues associated with the analysis of large and complex remotely sensed data sets. Recently, the Beowulf cluster has emerged as the most attractive, massively parallel architecture due to its low cost and high performance. Whereas most Beowulf designs have emphasized numerical modeling applications, the Parallel Image Processing Environment (PIPE) specifically addresses the unique requirements of remote sensing applications. Automated, parallelization of user-defined analyses is fully supported. A neural network application, applied to Along Track Scanning Radiometer-2 (ATSR-2) data shows the advantages and performance characteristics of PIPE.
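
    The tile-parallel pattern that PIPE automates can be sketched with the Python standard library alone; in this fragment the per-tile analysis and the row-strip tiling are placeholder assumptions, not PIPE's interface.

      import numpy as np
      from multiprocessing import Pool

      def process_tile(tile):
          # Placeholder per-tile analysis (e.g., a threshold or a classifier).
          return (tile > tile.mean()).astype(np.uint8)

      def parallel_process(image, n_workers=4):
          tiles = np.array_split(image, n_workers, axis=0)   # row strips
          with Pool(n_workers) as pool:
              out = pool.map(process_tile, tiles)
          return np.concatenate(out, axis=0)

      if __name__ == "__main__":
          img = np.random.rand(1024, 1024)
          print(parallel_process(img).shape)  # (1024, 1024)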

  12. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
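
    To make the flavor of such preconditioners concrete, the Python sketch below builds a two-level additive preconditioner (pointwise smoother plus coarse-grid solve) for a 1D model problem and hands it to conjugate gradients; the construction is an illustration in this spirit, not the paper's algorithm.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 255                                  # fine-grid interior points
      A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

      nc = (n - 1) // 2                        # coarse-grid points
      rows, cols, vals = [], [], []            # linear interpolation P: coarse -> fine
      for j in range(nc):
          i = 2 * j + 1
          rows += [i - 1, i, i + 1]; cols += [j, j, j]; vals += [0.5, 1.0, 0.5]
      P = sp.csc_matrix((vals, (rows, cols)), shape=(n, nc))

      coarse_solve = spla.factorized((P.T @ A @ P).tocsc())  # Galerkin coarse operator
      Dinv = 1.0 / A.diagonal()                              # Jacobi smoother

      def apply_M(r):                          # additive two-level action
          return Dinv * r + P @ coarse_solve(P.T @ r)

      M = spla.LinearOperator((n, n), matvec=apply_M)
      x, info = spla.cg(A, np.ones(n), M=M)
      print("converged:", info == 0)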

  13. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-05

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  14. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  15. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  16. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.

  17. Homology, convergence and parallelism

    PubMed Central

    Ghiselin, Michael T.

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  18. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
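
    The DFT-IDFT overlap-and-save building block at the heart of these architectures prototypes readily in numpy; this sketch checks itself against direct convolution but omits the subfilter decomposition and all VLSI considerations.

      import numpy as np

      def overlap_save(x, h, nfft=256):
          M = len(h)
          L = nfft - M + 1                               # valid outputs per block
          H = np.fft.rfft(h, nfft)
          x_pad = np.concatenate([np.zeros(M - 1), x])   # prime the overlap
          n_out = len(x) + M - 1                         # full convolution length
          y = np.zeros(n_out)
          for start in range(0, n_out, L):
              block = x_pad[start:start + nfft]
              if len(block) < nfft:
                  block = np.pad(block, (0, nfft - len(block)))
              yblk = np.fft.irfft(np.fft.rfft(block) * H, nfft)
              take = min(L, n_out - start)
              y[start:start + take] = yblk[M - 1:M - 1 + take]  # drop wrapped part
          return y

      x, h = np.random.randn(1000), np.ones(32) / 32.0
      print(np.allclose(overlap_save(x, h), np.convolve(x, h)))  # True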

  19. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  20. Parallel Computing in Optimization.

    DTIC Science & Technology

    1984-10-01

    include: Heller [1978] and Sameh [1977] (surveys of algorithms), Duff [1983], Fong and Jordan [1977], Jordan [1979], and Rodrigue [1982] (all mainly... constrained concave function by partition of feasible domain", Mathematics of Operations Research 8, pp. ... A. Sameh [1977], "Numerical parallel algorithms... a survey", in High Speed Computer and Algorithm Organization, D. Kuck, D. Lawrie, and A. Sameh, eds., Academic Press, pp. 207-228.

  1. Development of Parallel GSSHA

    DTIC Science & Technology

    2013-09-01

    [The indexed text for this report is an OCR fragment of its cover page rather than an abstract. Recoverable details: ERDC TR-13-8, "Development of Parallel GSSHA," Paul R. Eller, Jing-Ru C. Cheng, Aaron R. Byrd, Charles W. Downer, and Nawa Pradhan, Information Technology Laboratory, US Army Engineer Research and Development Center, September 2013; approved for public release.]

  2. Parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald; Camberos, Jose; Merriam, Marshal

    1991-01-01

    A parallel unstructured grid generation algorithm is presented and implemented on the Hypercube. Different processor hierarchies are discussed, and the appropriate hierarchies for mesh generation and mesh smoothing are selected. A domain-splitting algorithm for unstructured grids which tries to minimize the surface-to-volume ratio of each subdomain is described. This splitting algorithm is employed both for grid generation and grid smoothing. Results obtained on the Hypercube demonstrate the effectiveness of the algorithms developed.

  3. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    [The indexed text for this report is an OCR fragment rather than an abstract. Recoverable passages mention a pair-wise force law of repulsion and attraction defined for a group of identical agents, vector-quantization-based compression schemes, photo-refractive crystals used as high-density real-time holographic recording media, and references including "Synthesis of Parallel Algorithms" (J. Reif, ed.), Kluwer Academic Publishers, 1993, and "A Dynamic Separator Algorithm" by D. Armon and J. Reif.]

  4. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  5. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.
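
    Taking the reported figures at face value, the standard definitions of speedup and parallel efficiency (an inference from the abstract, not numbers stated in the report) give

      \[
        S(p) = \frac{T_1}{T_p}, \qquad
        E(p) = \frac{S(p)}{p}, \qquad
        E(16) \approx \frac{7}{16} \approx 0.44,
      \]

    i.e. roughly 44% efficiency on 16 processors, which is the headroom the authors propose to recover by also parallelizing in the state domain.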

  6. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Imamura, M. S.; Moser, R. L.; Veatch, M.

    1983-01-01

    Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined and automation functions between power subsystem, central spacecraft computer, and ground flight-support personnel are partitioned. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except for anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, a need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential in meeting the 1987 technology readiness date for the space station.

  7. Automated telescope scheduling

    NASA Astrophysics Data System (ADS)

    Johnston, Mark D.

    1988-08-01

    With the ever-increasing level of automation of astronomical telescopes the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling ground-based telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.

  8. Automated telescope scheduling

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.

    1988-01-01

    With the ever-increasing level of automation of astronomical telescopes the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling ground-based telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.

  9. Fully automated protein purification

    PubMed Central

    Camper, DeMarco V.; Viola, Ronald E.

    2009-01-01

    Obtaining highly purified proteins is essential to begin investigating their functional and structural properties. The steps that are typically involved in purifying proteins can include an initial capture, intermediate purification, and a final polishing step. Completing these steps can take several days and require frequent attention to ensure success. Our goal was to design automated protocols that will allow the purification of proteins with minimal operator intervention. Separate methods have been produced and tested that automate the sample loading, column washing, sample elution and peak collection steps for ion-exchange, metal affinity, hydrophobic interaction and gel filtration chromatography. These individual methods are designed to be coupled and run sequentially in any order to achieve a flexible and fully automated protein purification protocol. PMID:19595984

  10. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
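
    The tables rest on the parallel-resistance formula 1/Rt = 1/R1 + 1/R2, so Rt = R1*R2/(R1 + R2). A few lines of Python (illustrative; the resistor range is an arbitrary choice) enumerate the whole-number combinations such a table would contain:

      # Enumerate resistor pairs whose parallel combination is a whole number:
      # 1/Rt = 1/R1 + 1/R2  =>  Rt = R1*R2 / (R1 + R2).
      for r1 in range(1, 25):
          for r2 in range(r1, 25):
              if (r1 * r2) % (r1 + r2) == 0:
                  print(f"{r1} ohm || {r2} ohm = {r1 * r2 // (r1 + r2)} ohm")

    Combinations such as 3 || 6 = 2, 4 || 12 = 3, and 6 || 12 = 4 are exactly the kind of whole-number entries the article's tables collect.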

  11. Status of TRANSP Parallel Services

    NASA Astrophysics Data System (ADS)

    Indireshkumar, K.; Andre, Robert; McCune, Douglas; Randerson, Lewis

    2006-10-01

    The PPPL TRANSP code suite has been used successfully over many years to carry out time dependent simulations of tokamak plasmas. However, accurately modeling certain phenomena such as RF heating and fast ion behavior using TRANSP requires extensive computational power and will benefit from parallelization. Parallelizing all of TRANSP is not required; some parts will run sequentially while others run in parallel. To efficiently use a site's parallel services, the parallelized TRANSP modules are deployed to a shared ``parallel service'' on a separate cluster. The PPPL Monte Carlo fast ion module NUBEAM and the MIT RF module TORIC are the first TRANSP modules to be so deployed. This poster will show the performance scaling of these modules within the parallel server. Communications between the serial client and the parallel server will be described in detail, and measurements of startup and communications overhead will be shown. Physics modeling benefits for TRANSP users will be assessed.

  12. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  13. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  14. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)

  15. The Structure of Parallel Algorithms.

    DTIC Science & Technology

    1979-08-01

    [The indexed text for this report is an OCR fragment of its bibliography rather than an abstract. Recoverable citations on parallel architectures and parallel algorithms include Anderson and Jensen [75], Stone [75], Kung [76], Enslow [77], Kuck [77], Ramamoorthy and Li [77], Sameh [77], and Heller; "On the Routing Time on a Parallel Computer with a Fixed Interconnection Network," in D. J. Kuck, D. H. Lawrie, and A. H. Sameh, eds., High Speed Computer and Algorithm Organization; and A. H. Sameh, "Numerical Parallel Algorithms -- A Survey," in the same volume.]

  16. Parallel Debugging Using Graphical Views

    DTIC Science & Technology

    1988-03-01

    Voyeur, a prototype system for creating graphical views of parallel programs, provides a cost-effective way to construct such views for any parallel programming system. We illustrate Voyeur by discussing four views created for debugging Poker programs. One is a general trace facility for any Poker program. Graphical views are essential for debugging parallel programs because of the large quantity of state information contained in parallel programs.

  17. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  18. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  19. Automated gas chromatography

    DOEpatents

    Mowry, Curtis D.; Blair, Dianna S.; Rodacy, Philip J.; Reber, Stephen D.

    1999-01-01

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute.

  20. Automation in optics manufacturing

    NASA Astrophysics Data System (ADS)

    Pollicove, Harvey M.; Moore, Duncan T.

    1991-01-01

    The optics industry has not followed the lead of the machining and electronics industries in applying advances in computer aided engineering (CAE), computer assisted manufacturing (CAM), automation or quality management techniques. Automation based on computer integrated manufacturing (CIM) and flexible machining systems (FMS) has been widely implemented in these industries. Optics continues to rely on standalone equipment that preserves the highly skilled, labor intensive optical fabrication systems developed in the 1940s. This paper describes development initiatives at the Center for Optics Manufacturing that will create computer integrated manufacturing technology and support processes for the optical industry.

  1. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Engineering software development was automated using an expert system (rule-based) approach. The use of this technology offers benefits not available from current software development and maintenance methodologies. A workstation was built with a library or program data base with methods for browsing the designs stored; a system for graphical specification of designs including a capability for hierarchical refinement and definition in a graphical design system; and an automated code generation capability in FORTRAN. The workstation was then used in a demonstration with examples from an attitude control subsystem design for the space station. Documentation and recommendations are presented.

  2. Automating the CMS DAQ

    SciTech Connect

    Bauer, G.; et al.

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  3. Automated Library System Specifications.

    DTIC Science & Technology

    1986-06-01

    [The indexed text for this report is an OCR fragment of its cover page rather than an abstract. Recoverable details: "Automated Library System Specifications," prepared by Mary B. Bonnett, Army Library Management Office, Office of the Assistant Chief of Staff for Information Management, Alexandria, VA, June 1986; unclassified.]

  4. Automated HMC assembly

    SciTech Connect

    Blazek, R.J.

    1993-08-01

    An automated gold wire bonder was characterized for bonding 1-mil gold wires to gallium-arsenide (GaAs) monolithic microwave integrated circuits (MMICs) which are used in microwave radar transmitter-receiver (T/R) modules. Acceptable gold wire bond test results were obtained for the fragile, 5-mil-thick GaAs MMICs with gold-metallized bond pads; and average wire bond pull strengths, shear strengths, and failure modes were determined. An automated aluminum wire bonder was modified to be used as a gold wire bonder so that a wedge bond capability was available for GaAs MMICs in addition to the gold ball bond capability.

  5. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is in the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on an object-oriented language (Flavors).

  6. Massively Parallel Genetics.

    PubMed

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  7. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  8. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  9. Altering users' acceptance of automation through prior automation exposure.

    PubMed

    Bekier, Marek; Molesworth, Brett R C

    2016-08-22

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation (the 'tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. the 'tipping point') is constant or can be manipulated. The results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  10. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  11. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  12. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  13. Human Factors In Aircraft Automation

    NASA Technical Reports Server (NTRS)

    Billings, Charles

    1995-01-01

    Report presents survey of state of art in human factors in automation of aircraft operation. Presents examination of aircraft automation and effects on flight crews in relation to human error and aircraft accidents.

  14. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  15. Benchmarking massively parallel architectures

    SciTech Connect

    Lubeck, O.; Moore, J.; Simmons, M.; Wasserman, H.

    1993-01-01

    The purpose of this paper is to summarize some initial experiences related to measuring the performance of massively parallel processors (MPPs) at Los Alamos National Laboratory (LANL). Actually, the range of MPP architectures the authors have used is rather limited, being confined mostly to the Thinking Machines Corporation (TMC) Connection Machine CM-2 and CM-5. Some very preliminary work has been carried out on the Kendall Square KSR-1, and efforts related to other machines, such as the Intel Paragon and the soon-to-be-released CRAY T3D are planned. This paper will concentrate more on methodology rather than discuss specific architectural strengths and weaknesses; the latter is expected to be the subject of future reports. MPP benchmarking is a field in critical need of structure and definition. As the authors have stated previously, such machines have enormous potential, and there is certainly a dire need for orders of magnitude computational power over current supercomputers. However, performance reports for MPPs must emphasize actual sustainable performance from real applications in a careful, responsible manner. Such has not always been the case. A recent paper has described in some detail, the problem of potentially misleading performance reporting in the parallel scientific computing field. Thus, in this paper, the authors briefly offer a few general ideas on MPP performance analysis.

  16. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.

  17. Parallel Eigenvalue extraction

    NASA Technical Reports Server (NTRS)

    Akl, Fred A.

    1989-01-01

    A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is utilized in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. Assembly, elimination and back-substitution of degrees of freedom are performed concurrently, using a number of fronts. All fronts converge to and diverge from a predefined global front during elimination and back-substitution, respectively. In the meantime, reduction of the stiffness and mass matrices required by the modified subspace method can be completed during the convergence/divergence cycle and an estimate of the required eigenpairs obtained. Successive cycles of convergence and divergence are repeated until the desired accuracy of calculations is achieved. The advantages of this new algorithm in parallel computer architecture are discussed.

  18. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  19. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  20. Parallel ptychographic reconstruction

    SciTech Connect

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  1. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, I implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
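
    A minimal sketch of the backend idea using pymongo (the collection name, fields, and records are hypothetical; the real tool's schema and security layers are more involved):

      from pymongo import MongoClient

      client = MongoClient("localhost", 27017)      # assumed local MongoDB
      coll = client["archive"]["user_jdoe"]          # hypothetical per-user table

      # Import a few GPFS-like metadata records (fields are illustrative).
      coll.insert_many([
          {"path": "/archive/run1/out.h5", "size": 512, "owner": "jdoe",
           "tags": ["simulation", "2012"]},
          {"path": "/archive/run2/out.h5", "size": 2048, "owner": "jdoe",
           "tags": ["simulation"]},
      ])

      # Index each searchable attribute, as the backend described above does.
      for field in ("path", "size", "owner", "tags"):
          coll.create_index(field)

      # A metadata search replaces navigating paths in the file system.
      for doc in coll.find({"tags": "simulation", "size": {"$gt": 1024}}):
          print(doc["path"])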

  2. Workload Capacity: A Response Time-Based Measure of Automation Dependence.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2016-05-01

    An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, C_OR(t) and C_AND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.
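
    For readers unfamiliar with the measure, workload capacity is conventionally defined from integrated hazard functions of the RT distributions (Townsend's formulation, stated here as background; the paper's exact variants are not reproduced in this abstract):

      \[
        H(t) = -\ln S(t), \qquad
        C_{\mathrm{OR}}(t) = \frac{H_{AB}(t)}{H_{A}(t) + H_{B}(t)}, \qquad
        C_{\mathrm{AND}}(t) = \frac{K_{A}(t) + K_{B}(t)}{K_{AB}(t)},
      \]

    where S(t) is the survivor function of the RT distribution, K(t) = ln F(t) is the cumulative reverse hazard, subscripts A and B denote single-source conditions and AB the combined condition. A value of 1 is the unlimited-capacity parallel benchmark, with values above or below 1 indicating super- or limited capacity.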

  3. Development and evaluation of systems for controlling parallel high di/dt thyratrons

    SciTech Connect

    Litton, A.; McDuff, G.

    1982-01-01

    Increasing numbers of high power, high repetition rate applications dictate the use of thyratrons in multiple or hard-parallel configurations to achieve the required rate of current rise, di/dt. This in turn demands the development of systems to control parallel thyratron commutation with nanosecond accuracy. Such systems must be capable of real-time, fully-automated control in multi-kilohertz applications while still remaining cost effective. This paper describes the evolution of such a control methodology and system.

  4. Embryoid Body-Explant Outgrowth Cultivation from Induced Pluripotent Stem Cells in an Automated Closed Platform

    PubMed Central

    Tone, Hiroshi; Yoshioka, Saeko; Akiyama, Hirokazu; Nishimura, Akira; Ichimura, Masaki; Nakatani, Masaru; Kiyono, Tohru

    2016-01-01

    Automation of cell culture would facilitate stable cell expansion with consistent quality. In the present study, feasibility of an automated closed-cell culture system “P 4C S” for an embryoid body- (EB-) explant outgrowth culture was investigated as a model case for explant culture. After placing the induced pluripotent stem cell- (iPSC-) derived EBs into the system, the EBs successfully adhered to the culture surface and the cell outgrowth was clearly observed surrounding the adherent EBs. After confirming the outgrowth, we carried out subculture manipulation, in which the detached cells were simply dispersed by shaking the culture flask, leading to uniform cell distribution. This enabled continuous stable cell expansion, resulting in a cell yield of 3.1 × 10^7. There was no evidence of bacterial contamination throughout the cell culture experiments. We herewith developed the automated cultivation platform for EB-explant outgrowth cells. PMID:27648449

  5. Automated Management Of Documents

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1995-01-01

    Report presents main technical issues involved in computer-integrated documentation. Problems associated with automation of management and maintenance of documents analyzed from perspectives of artificial intelligence and human factors. Technologies that may prove useful in computer-integrated documentation reviewed: these include conventional approaches to indexing and retrieval of information, use of hypertext, and knowledge-based artificial-intelligence systems.

  6. Guide to Library Automation.

    ERIC Educational Resources Information Center

    Toohill, Barbara G.

    Directed toward librarians and library administrators who wish to procure automated systems or services for their libraries, this guide offers practical suggestions, advice, and methods for determining requirements, estimating costs and benefits, writing specifications, procuring systems, negotiating contracts, and installing systems. The advice…

  7. Automated Management Of Documents

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1995-01-01

    Report presents main technical issues involved in computer-integrated documentation. Problems associated with automation of management and maintenance of documents analyzed from perspectives of artificial intelligence and human factors. Technologies that may prove useful in computer-integrated documentation reviewed: these include conventional approaches to indexing and retrieval of information, use of hypertext, and knowledge-based artificial-intelligence systems.

  8. Microcontroller for automation application

    NASA Technical Reports Server (NTRS)

    Cooper, H. W.

    1975-01-01

    The description of a microcontroller currently being developed for automation application was given. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  9. Building Automation Systems.

    ERIC Educational Resources Information Center

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  10. Automated Estimating System (AES)

    SciTech Connect

    Holder, D.A.

    1989-09-01

    This document describes Version 3.1 of the Automated Estimating System, a personal computer-based software package designed to aid in the creation, updating, and reporting of project cost estimates for the Estimating and Scheduling Department of the Martin Marietta Energy Systems Engineering Division. Version 3.1 of the Automated Estimating System is capable of running in a multiuser environment across a token ring network. The token ring network makes possible services and applications that will more fully integrate all aspects of information processing, provides a central area for large data bases to reside, and allows access to the data base by multiple users. Version 3.1 of the Automated Estimating System also has been enhanced to include an Assembly pricing data base that may be used to retrieve cost data into an estimate. A WBS Title File program has also been included in Version 3.1. The WBS Title File program allows for the creation of a WBS title file that has been integrated with the Automated Estimating System to provide WBS titles in update mode and in reports. This provides for consistency in WBS titles and provides the capability to display WBS titles on reports generated at a higher WBS level.

  11. Automated Essay Scoring

    ERIC Educational Resources Information Center

    Dikli, Semire

    2006-01-01

    The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e. word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…

  12. Automated Microbial Genome Annotation

    SciTech Connect

    Land, Miriam

    2009-05-29

    Miriam Land of the DOE Joint Genome Institute at Oak Ridge National Laboratory gives a talk on the current state and future challenges of moving toward automated microbial genome annotation at the "Sequencing, Finishing, Analysis in the Future" meeting in Santa Fe, NM

  13. Building Automation Systems.

    ERIC Educational Resources Information Center

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  14. Automated Tendering and Purchasing.

    ERIC Educational Resources Information Center

    DeZorzi, James M.

    1980-01-01

    The Middlesex County Board of Education in Hyde Park (Ontario) has developed an automated tendering/purchasing system for ordering standard items that has reduced by 80 percent the time required for tendering, evaluating, awarding, and ordering items. (Author/MLF)

  15. Automated conflict resolution issues

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey S.

    1991-01-01

    A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.

  16. ATC automation concepts

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1990-01-01

    Information on the design of human-centered tools for terminal area air traffic control (ATC) is given in viewgraph form. Information is given on payoffs and products, guidelines, ATC as a team process, automation tools for ATC, and the traffic management advisor.

  17. Automated Administrative Data Bases

    NASA Technical Reports Server (NTRS)

    Marrie, M. D.; Jarrett, J. R.; Reising, S. A.; Hodge, J. E.

    1984-01-01

    Improved productivity and more effective response to information requirements for internal management, NASA Centers, and Headquarters resulted from using automated techniques. Modules developed to provide information on manpower, RTOPS, full time equivalency, and physical space reduced duplication, increased communication, and saved time. There is potential for greater savings by sharing and integrating with those who have the same requirements.

  18. Automating Small Libraries.

    ERIC Educational Resources Information Center

    Swan, James

    1996-01-01

    Presents a four-phase plan for small libraries strategizing for automation: inventory and weeding, data conversion, implementation, and enhancements. Other topics include selecting a system, MARC records, compatibility, ease of use, industry standards, searching capabilities, support services, system security, screen displays, circulation modules,…

  19. Automated Lumber Processing

    Treesearch

    Powsiri Klinkhachorn; J. Moody; Philip A. Araman

    1995-01-01

    For the past few decades, researchers have devoted time and effort to apply automation and modern computer technologies towards improving the productivity of traditional industries. To be competitive, one must streamline operations and minimize production costs, while maintaining an acceptable margin of profit. This paper describes the effort of one such endeavor...

  20. Personnel Department Automation.

    ERIC Educational Resources Information Center

    Wilkinson, David

    In 1989, the Austin Independent School District's Office of Research and Evaluation was directed to monitor the automation of personnel information and processes in the district's Department of Personnel. Earlier, a study committee appointed by the Superintendent during the 1988-89 school year identified issues related to Personnel Department…

  1. Automated Accounting. Instructor Guide.

    ERIC Educational Resources Information Center

    Moses, Duane R.

    This curriculum guide was developed to assist business instructors using Dac Easy Accounting College Edition Version 2.0 software in their accounting programs. The module consists of four units containing assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting. The first…

  2. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  3. Validating Automated Speaking Tests

    ERIC Educational Resources Information Center

    Bernstein, Jared; Van Moere, Alistair; Cheng, Jian

    2010-01-01

    This paper presents evidence that supports the valid use of scores from fully automatic tests of spoken language ability to indicate a person's effectiveness in spoken communication. The paper reviews the constructs, scoring, and the concurrent validity evidence of "facility-in-L2" tests, a family of automated spoken language tests in Spanish,…

  4. Automated EEG acquisition

    NASA Technical Reports Server (NTRS)

    Frost, J. D., Jr.; Hillman, C. E., Jr.

    1977-01-01

    Automated self-contained portable device can be used by technicians with minimal training. Data acquired from patient at remote site are transmitted to centralized interpretation center using conventional telephone equipment. There, diagnostic information is analyzed, and results are relayed back to remote site.

  5. Automated Inadvertent Intruder Application

    SciTech Connect

    Koffman, Larry D.; Lee, Patricia L.; Cook, James R.; Wilhite, Elmer L.

    2008-01-15

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and
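
    In outline, the pathway dose algebra the abstract describes is a sum over pathways of an intake multiplied by a dose conversion factor. A minimal sketch of that arithmetic (all names, values, and units below are hypothetical placeholders, not parameters of the SRS application):

      # Illustrative only: scenario dose = sum over pathways of intake * DCF.
      # The pathways, values, and units are assumed for the example.
      dcf = {"ingestion": 2.8e-9, "inhalation": 5.0e-8}    # Sv per Bq (assumed)
      intake = {"ingestion": 1.2e4, "inhalation": 3.0e2}   # Bq per year (assumed)

      dose_sv = sum(intake[p] * dcf[p] for p in dcf)
      print(f"annual dose: {dose_sv:.2e} Sv")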

  6. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of the systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler which can generate efficient parallel code for complete LINPACK routines. This book begins by analyzing the architectural strength of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  7. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  8. Automating spectral measurements

    NASA Astrophysics Data System (ADS)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
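
    On Windows, an ActiveX automation server of this kind can be driven from any COM-capable client. A minimal hedged sketch using Python's pywin32 (the ProgID "FTG.SpectroServer" and the property and method names are hypothetical placeholders, not the actual FTG interface):

      import win32com.client  # pywin32; Windows-only

      # "FTG.SpectroServer" is a hypothetical ProgID standing in for any
      # ActiveX DAQ application that registers an automation interface;
      # the property and method names below are placeholders as well.
      app = win32com.client.Dispatch("FTG.SpectroServer")
      app.Visible = False                       # run the server invisibly
      scan = app.RunScan(400.0, 700.0, 1.0)     # start, stop, step in nm (assumed)
      print(scan)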

  9. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
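
    A minimal numerical rendering of the sum architecture, assuming incoherent beam combination so that Stokes vectors add: the target SOP is expressed as a nonnegative (intensity-modulated) combination of fixed polarization channels and solved with nonnegative least squares. The basis states and target below are arbitrary examples, not the authors' device parameters.

        import numpy as np
        from scipy.optimize import nnls

        # Columns are Stokes vectors (S0, S1, S2, S3) of four fixed channels:
        # horizontal, vertical, diagonal, and right-circular polarization.
        basis = np.array([[1,  1, 1, 1],
                          [1, -1, 0, 0],
                          [0,  0, 1, 0],
                          [0,  0, 0, 1]], dtype=float)
        target = np.array([1.0, 0.3, 0.4, 0.2])   # desired output SOP
        weights, residual = nnls(basis, target)   # channel intensities >= 0
        print("channel intensities:", weights, "residual:", residual)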

  10. Parallel Polarization State Generation.

    PubMed

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  11. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
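
    For reference, below is a serial Python sketch of cyclic odd-even reduction; on an array machine the inner i-loop of every sweep runs in parallel, which is the source of the algorithm's appeal on ILLIAC-4-class hardware. The sketch assumes n = 2**k - 1 unknowns and a well-conditioned (e.g., diagonally dominant) system.

        import numpy as np

        def cyclic_reduction(a, b, c, d):
            # a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
            # (c[-1] unused), d: right-hand side; requires n = 2**k - 1.
            a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
            n = len(b)
            x = np.zeros(n)
            stride = 1
            while stride < n:       # forward elimination sweeps
                # Every update in one sweep is independent -> parallel i-loop.
                for i in range(2 * stride - 1, n, 2 * stride):
                    lo, hi = i - stride, i + stride
                    al = a[i] / b[lo]
                    ga = c[i] / b[hi] if hi < n else 0.0
                    b[i] -= al * c[lo] + (ga * a[hi] if hi < n else 0.0)
                    d[i] -= al * d[lo] + (ga * d[hi] if hi < n else 0.0)
                    a[i] = -al * a[lo]
                    c[i] = -ga * c[hi] if hi < n else 0.0
                stride *= 2
            while stride >= 1:      # back-substitution sweeps
                for i in range(stride - 1, n, 2 * stride):
                    s = d[i]
                    if i - stride >= 0: s -= a[i] * x[i - stride]
                    if i + stride < n:  s -= c[i] * x[i + stride]
                    x[i] = s / b[i]
                stride //= 2
            return x

        rng = np.random.default_rng(2)
        n = 15                        # 2**4 - 1
        b = 4 + rng.random(n); a = rng.random(n); c = rng.random(n)
        a[0] = c[-1] = 0.0
        d = rng.random(n)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))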

  12. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  13. High-Throughput Automation in Chemical Process Development.

    PubMed

    Selekman, Joshua A; Qiu, Jun; Tran, Kristy; Stevens, Jason; Rosso, Victor; Simmons, Eric; Xiao, Yi; Janey, Jacob

    2017-06-07

    High-throughput (HT) techniques built upon laboratory automation technology and coupled to statistical experimental design and parallel experimentation have enabled the acceleration of chemical process development across multiple industries. HT technologies are often applied to interrogate wide, often multidimensional experimental spaces to inform the design and optimization of any number of unit operations that chemical engineers use in process development. In this review, we outline the evolution of HT technology and provide a comprehensive overview of how HT automation is used throughout different industries, with a particular focus on chemical and pharmaceutical process development. In addition, we highlight the common strategies of how HT automation is incorporated into routine development activities to maximize its impact in various academic and industrial settings.

  14. Parallel imaging microfluidic cytometer.

    PubMed

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

  15. A parallel programming environment supporting multiple data-parallel modules

    SciTech Connect

    Seevers, B.K.; Quinn, M.J. ); Hatcher, P.J. )

    1992-10-01

    We describe a system that allows programmers to take advantage of both control and data parallelism through multiple intercommunicating data-parallel modules. This programming environment extends C-type stream I/O to include intermodule communication channels. The programmer writes each module as a separate data-parallel program, then develops a channel linker specification describing how to connect the modules together. A channel linker we have developed loads the separate modules on the parallel machine and binds the communication channels together as specified. We present performance data that demonstrates that a mixed control- and data-parallel solution can yield better performance than a strictly data-parallel solution. The system described currently runs on the Intel iWarp multicomputer.
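
    The channel idea can be sketched in modern terms with two independently written modules joined by an explicit channel; here a multiprocessing Pipe stands in for the intermodule channel bound by the channel linker, and the module bodies are trivial stand-ins.

        from multiprocessing import Process, Pipe

        def producer(chan):
            # First "module": computes results and streams them out.
            for i in range(5):
                chan.send(i * i)
            chan.send(None)             # end-of-stream marker
            chan.close()

        def consumer(chan):
            # Second "module": reads its input channel, unaware of how the
            # producer is implemented or parallelized internally.
            while (item := chan.recv()) is not None:
                print("received", item)

        if __name__ == "__main__":
            rx, tx = Pipe(duplex=False)     # the communication channel
            p = Process(target=producer, args=(tx,))
            c = Process(target=consumer, args=(rx,))
            p.start(); c.start(); p.join(); c.join()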

  16. Combinatorial parallel and scientific computing.

    SciTech Connect

    Pinar, Ali; Hendrickson, Bruce Alan

    2005-04-01

    Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel computing. Graph algorithms in particular arise in load balancing, scheduling, mapping and many other aspects of the parallelization of irregular applications. These are still active research areas, mostly due to evolving computational techniques and rapidly changing computational platforms. But the relationship between parallel computing and discrete algorithms is much richer than the mere use of graph algorithms to support the parallelization of traditional scientific computations. Important, emerging areas of science are fundamentally discrete, and they are increasingly reliant on the power of parallel computing. Examples include computational biology, scientific data mining, and network analysis. These applications are changing the relationship between discrete algorithms and parallel computing. In addition to their traditional role as enablers of high performance, combinatorial algorithms are now customers for parallel computing. New parallelization techniques for combinatorial algorithms need to be developed to support these nontraditional scientific approaches. This chapter will describe some of the many areas of intersection between discrete algorithms and parallel scientific computing. Due to space limitations, this chapter is not a comprehensive survey, but rather an introduction to a diverse set of techniques and applications with a particular emphasis on work presented at the Eleventh SIAM Conference on Parallel Processing for Scientific Computing. Some topics highly relevant to this chapter (e.g. load balancing) are addressed elsewhere in this book, and so we will not discuss them here.

  17. Automated target preparation for microarray-based gene expression analysis.

    PubMed

    Raymond, Frédéric; Metairon, Sylviane; Borner, Roland; Hofmann, Markus; Kussmann, Martin

    2006-09-15

    DNA microarrays have rapidly evolved toward a platform for massively paralleled gene expression analysis. Despite its widespread use, the technology has been criticized to be vulnerable to technical variability. Addressing this issue, recent comparative, interplatform, and interlaboratory studies have revealed that, given defined procedures for "wet lab" experiments and data processing, a satisfactory reproducibility and little experimental variability can be achieved. In view of these advances in standardization, the requirement for uniform sample preparation becomes evident, especially if a microarray platform is used as a facility, i.e., by different users working in the laboratory. While one option to reduce technical variability is to dedicate one laboratory technician to all microarray studies, we have decided to automate the entire RNA sample preparation implementing a liquid handling system coupled to a thermocycler and a microtiter plate reader. Indeed, automated RNA sample preparation prior to chip analysis enables (1) the reduction of experimentally caused result variability, (2) the separation of (important) biological variability from (undesired) experimental variation, and (3) interstudy comparison of gene expression results. Our robotic platform can process up to 24 samples in parallel, using an automated sample preparation method that produces high-quality biotin-labeled cRNA ready to be hybridized on Affymetrix GeneChips. The results show that the technical interexperiment variation is less pronounced than with manually prepared samples. Moreover, experiments using the same starting material showed that the automated process yields a good reproducibility between samples.

  18. Automated measurement and quantification of heterotrophic bacteria in water samples based on the MPN method.

    PubMed

    Fuchsluger, C; Preims, M; Fritz, I

    2011-01-01

    Quantification of heterotrophic bacteria is a widely used measure for water analysis. Especially in terms of drinking water analysis, testing for microorganisms is strictly regulated by the European Drinking Water Directive, including quality criteria and detection limits. The quantification procedure presented in this study is based on the most probable number (MPN) method, which was adapted to comply with the need for a quick and easy screening tool for different kinds of water samples as well as varying microbial loads. Replacing tubes with 24-well titer plates for cultivation of bacteria drastically reduces the amount of culture media and also simplifies incubation. Automated photometric measurement of turbidity instead of visual evaluation of bacterial growth avoids misinterpretation by operators. Definition of a threshold ensures definite and user-independent determination of microbial growth. Calculation of the MPN itself is done using a program provided by the US Food and Drug Administration (FDA). For evaluation of the method, real water samples of different origins as well as pure cultures of bacteria were analyzed in parallel with the conventional plating methods. Thus, the procedure described requires less preparation time, reduces costs and ensures both stable and reliable results for water samples.
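
    The MPN value itself (a step the study delegates to the FDA program) is the maximum-likelihood solution of a one-dimensional equation. The sketch below, with invented counts, illustrates the calculation for a plate divided into three dilution rows.

        import numpy as np
        from scipy.optimize import brentq

        def mpn(volumes_ml, n_wells, n_positive):
            v, n, p = (np.asarray(u, dtype=float)
                       for u in (volumes_ml, n_wells, n_positive))
            # The maximum-likelihood concentration lam solves
            #   sum_i p_i * v_i / (1 - exp(-lam * v_i)) = sum_i n_i * v_i
            f = lambda lam: np.sum(p * v / (1 - np.exp(-lam * v))) - np.sum(n * v)
            return brentq(f, 1e-9, 1e9)          # organisms per ml

        # Example: a 24-well plate split into three 8-well dilution rows.
        print(mpn([0.1, 0.01, 0.001], [8, 8, 8], [8, 5, 1]))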

  19. Mass culture of photobacteria to obtain luciferase

    NASA Technical Reports Server (NTRS)

    Chappelle, E. W.; Picciolo, G. L.; Rich, E., Jr.

    1969-01-01

    Inoculating preheated trays containing nutrient agar with photobacteria provides a means for mass culture of aerobic microorganisms in order to obtain large quantities of luciferase. To determine optimum harvest time, growth can be monitored by automated light-detection instrumentation.

  20. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  1. Automated optical assembly

    NASA Astrophysics Data System (ADS)

    Bala, John L.

    1995-08-01

    Automation and polymer science represent fundamental new technologies which can be directed toward realizing the goal of establishing a domestic, world-class, commercial optics business. Use of innovative optical designs using precision polymer optics will enable the US to play a vital role in the next generation of commercial optical products. The cost savings inherent in the utilization of optical-grade polymers outweigh almost every advantage of using glass for high-volume situations. Optical designers must gain experience with combined refractive/diffractive designs and broaden their knowledge base regarding polymer technology beyond a cursory intellectual exercise. Implementation of a fully automated assembly system, combined with utilization of polymer optics, constitutes the type of integrated manufacturing process which will enable the US to successfully compete with the low-cost labor employed in the Far East, as well as to produce an equivalent product.

  2. Automated macromolecular crystallization screening

    DOEpatents

    Segelke, Brent W.; Rupp, Bernhard; Krupka, Heike I.

    2005-03-01

    An automated macromolecular crystallization screening system wherein a multiplicity of reagent mixes are produced. A multiplicity of analysis plates is produced utilizing the reagent mixes combined with a sample. The analysis plates are incubated to promote growth of crystals. Images of the crystals are made. The images are analyzed with regard to suitability of the crystals for analysis by x-ray crystallography. A design of reagent mixes is produced based upon the expected suitability of the crystals for analysis by x-ray crystallography. A second multiplicity of mixes of the reagent components is produced utilizing the design and a second multiplicity of reagent mixes is used for a second round of automated macromolecular crystallization screening. In one embodiment the multiplicity of reagent mixes are produced by a random selection of reagent components.
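
    A hedged sketch of the iterative loop described above: a random first-round design, image-based scoring (stubbed out with random numbers here), and a second round biased toward components from the most promising wells. All component names are invented.

        import random

        COMPONENTS = ["PEG 4000", "(NH4)2SO4", "NaCl", "MPD",
                      "Tris pH 8.0", "HEPES pH 7.0"]

        def random_mix(k=3):
            return tuple(random.sample(COMPONENTS, k))

        def score(mix):                # stand-in for automated image analysis
            return random.random()     # 0 = clear drop ... 1 = usable crystal

        first_round = [random_mix() for _ in range(96)]
        ranked = sorted(first_round, key=score, reverse=True)
        # Second-round pool: components of the 12 best wells; repeats in the
        # pool bias the new design toward frequently successful components.
        pool = [comp for mix in ranked[:12] for comp in mix]
        second_round = [tuple(random.sample(pool, 3)) for _ in range(96)]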

  3. The automated command transmission

    NASA Astrophysics Data System (ADS)

    Inoue, Y.; Satoh, S.

    A technique for automated command transmission (ACT) to geostationary satellites is presented. The system is intended to ease the command center workload. The ACT system determines the relation of commands to on-board units, connects the telemetry with on-board units, defines the control path on the spacecraft, identifies the correspondence of back-up units to primary units, and ascertains sunlight or eclipse conditions. The system also stores the addresses of the satellite and its command decoders, the ID and content of the mission command sequence, group and inhibit codes, and a listing of all available commands, and it restricts the data to a command sequence. Telemetry supplies data for automated problem correction. All other mission operations are terminated during system recovery data processing after a crash. The ACT system is intended for use with the GMS spacecraft.

  4. Automated gas chromatography

    DOEpatents

    Mowry, C.D.; Blair, D.S.; Rodacy, P.J.; Reber, S.D.

    1999-07-13

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute. 7 figs.

  5. Automated Pollution Control

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Patterned after the Cassini Resource Exchange (CRE), Sholtz and Associates established the Automated Credit Exchange (ACE), an Internet-based concept that automates the auctioning of "pollution credits" in Southern California. An early challenge of the Jet Propulsion Laboratory's Cassini mission was allocating the spacecraft's resources. To support the decision-making process, the CRE was developed. The system removes the need for the science instrument manager to know the individual instruments' requirements for the spacecraft resources. Instead, by utilizing principles of exchange, the CRE induces the instrument teams to reveal their requirements. In doing so, they arrive at an efficient allocation of spacecraft resources by trading among themselves. A Southern California RECLAIM air pollution credit trading market has been set up using the same bartering methods utilized in the Cassini mission, in order to help companies keep pollution and costs down.

  6. Automated assembly in space

    NASA Technical Reports Server (NTRS)

    Srivastava, Sandanand; Dwivedi, Suren N.; Soon, Toh Teck; Bandi, Reddy; Banerjee, Soumen; Hughes, Cecilia

    1989-01-01

    The installation of robots and their use of assembly in space will create an exciting and promising future for the U.S. Space Program. The concept of assembly in space is very complicated and error prone and it is not possible unless the various parts and modules are suitably designed for automation. Certain guidelines are developed for part designing and for an easy precision assembly. Major design problems associated with automated assembly are considered and solutions to resolve these problems are evaluated in the guidelines format. Methods for gripping and methods for part feeding are developed with regard to the absence of gravity in space. The guidelines for part orientation, adjustments, compliances and various assembly construction are discussed. Design modifications of various fasteners and fastening methods are also investigated.

  7. Terminal automation system maintenance

    SciTech Connect

    Coffelt, D.; Hewitt, J.

    1997-01-01

    Nothing has improved petroleum product loading in recent years more than terminal automation systems. The presence of terminal automation systems (TAS) at loading racks has increased operational efficiency and safety and enhanced their accounting and management capabilities. However, like all finite systems, they occasionally malfunction or fail. Proper servicing and maintenance can minimize this. And in the unlikely event a TAS breakdown does occur, prompt and effective troubleshooting can reduce its impact on terminal productivity. To accommodate around-the-clock loading at racks, increasingly unattended by terminal personnel, TAS maintenance, servicing and troubleshooting has become increasingly demanding. It has also become increasingly important. After 15 years of trial and error at petroleum and petrochemical storage and transfer terminals, a number of successful troubleshooting programs have been developed. These include 24-hour "help hotlines," internal (terminal company) and external (supplier) support staff, and "layered" support. These programs are described.

  8. Automated campaign system

    NASA Astrophysics Data System (ADS)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    To run a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, acquiring content and images, composing the materials, meeting the sponsoring enterprise brand standards, driving through production and fulfillment, and evaluating results; all processes are currently performed by experienced highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user could easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted will be how the complexity of running a targeted campaign is hidden from the user through technologies, all while providing the benefits of a professionally managed campaign.

  9. Automated chemiluminescence immunoassay measurements

    NASA Astrophysics Data System (ADS)

    Khalil, Omar S.; Mattingly, G. P.; Genger, K.; Mackowiak, J.; Butler, J.; Pepe, C.; Zurek, T. F.; Abunimeh, N.

    1993-06-01

    Chemiluminescence (CL) detection offers potential for high-sensitivity immunoassays (CLIAs). Several approaches have been attempted to automate CL measurements, including the use of photographic film, clear microtitration plates, and magnetic separation. We describe a photon-counting detection apparatus that performs CLIA measurements. The CL detector moves toward a disposable reaction vessel to create a light-tight seal and then triggers and integrates a CL signal. Capture uses antibody-coated polystyrene microparticles. A porous matrix, which is part of a disposable reaction tray, entraps the microparticle-captured reaction product. The CL signal emanating from the immune complex immobilized by the porous matrix is detected. The detection system is part of a fully automated immunoassay analyzer. Methods of achieving high sensitivities are discussed.

  10. Automated Chromosome Breakage Assessment

    NASA Technical Reports Server (NTRS)

    Castleman, Kenneth

    1985-01-01

    An automated karyotyping machine was built at JPL in 1972. It does computerized karyotyping, but it has some hardware limitations. The image processing hardware that was available at a reasonable price in 1972 was marginal, at best, for this job. In the meantime, NASA has developed an interest in longer term spaceflights and an interest in using chromosome breakage studies as a dosimeter for radiation or perhaps other damage that might occur to the tissues. This uses circulating lymphocytes as a physiological dosimeter looking for chromosome breakage on long-term spaceflights. For that reason, we have reactivated the automated karyotyping work at JPL. An update on that work, and a description of where it appears to be headed is presented.

  11. Automated Chromosome Breakage Assessment

    NASA Technical Reports Server (NTRS)

    Castleman, Kenneth

    1985-01-01

    An automated karyotyping machine was built at JPL in 1972. It does computerized karyotyping, but it has some hardware limitations. The image processing hardware that was available at a reasonable price in 1972 was marginal, at best, for this job. In the meantime, NASA has developed an interest in longer term spaceflights and an interest in using chromosome breakage studies as a dosimeter for radiation or perhaps other damage that might occur to the tissues. This uses circulating lymphocytes as a physiological dosimeter looking for chromosome breakage on long-term spaceflights. For that reason, we have reactivated the automated karyotyping work at JPL. An update on that work, and a description of where it appears to be headed is presented.

  12. The automation of science.

    PubMed

    King, Ross D; Rowland, Jem; Oliver, Stephen G; Young, Michael; Aubrey, Wayne; Byrne, Emma; Liakata, Maria; Markham, Magdalena; Pir, Pinar; Soldatova, Larisa N; Sparkes, Andrew; Whelan, Kenneth E; Clare, Amanda

    2009-04-03

    The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist "Adam," which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge.

  13. Automated Assembly Center (AAC)

    NASA Technical Reports Server (NTRS)

    Stauffer, Robert J.

    1993-01-01

    The objectives of this project are as follows: to integrate advanced assembly and assembly support technology under a comprehensive architecture; to implement automated assembly technologies in the production of high-visibility DOD weapon systems; and to document the improved cost, quality, and lead time. This will enhance the production of DOD weapon systems by utilizing the latest commercially available technologies combined into a flexible system that will be able to readily incorporate new technologies as they emerge. Automated assembly encompasses the following areas: product data, process planning, information management policies and framework, three schema architecture, open systems communications, intelligent robots, flexible multi-ability end effectors, knowledge-based/expert systems, intelligent workstations, intelligent sensor systems, and PDES/PDDI data standards.

  14. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of the phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  15. Automated RTOP Management System

    NASA Technical Reports Server (NTRS)

    Hayes, P.

    1984-01-01

    The structure of NASA's Office of Aeronautics and Space Technology electronic information system network from 1983 to 1985 is illustrated. The RTOP automated system takes advantage of existing hardware, software, and expertise, and provides: (1) computerized cover sheet and resources forms; (2) electronic signature and transmission; (3) a data-based information system; (4) graphics; (5) intercenter communications; (6) management information; and (7) text editing. The system is coordinated with Headquarters efforts in codes R,E, and T.

  16. Automated RSO Stability Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, T.

    2016-09-01

    A methodology for assessing the attitude stability of a Resident Space Object (RSO) using visual magnitude data is presented and then scaled to run in an automated fashion across the entire satellite catalog. Results obtained by applying the methodology to the Commercial Space Operations Center (COMSpOC) catalog are presented and summarized, identifying objects that have changed stability. We also examine the timeline for detecting the transition from stable to unstable attitude.
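
    As a purely illustrative screen (not the COMSpOC algorithm), a tumbling object tends to show much larger swings in observed visual magnitude than a stabilized one, so even a simple spread statistic can flag candidates for closer analysis; the threshold here is arbitrary.

        import numpy as np

        def attitude_flag(magnitudes, threshold=0.5):
            spread = np.std(magnitudes)      # scatter in visual magnitude
            return "unstable" if spread > threshold else "stable"

        print(attitude_flag([7.1, 7.0, 7.2, 7.1]))        # stable
        print(attitude_flag([6.2, 8.0, 6.5, 8.3, 6.1]))   # unstable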

  17. Automated Nitrocellulose Analysis

    DTIC Science & Technology

    1978-12-01

    The automated method, based on the Technicon AutoAnalyzer, involves aspiration of a stirred nitrocellulose suspension, dialysis against 9 percent saline, and hydrolysis with 5N sodium... As would be expected from the theory of osmosis, a high saline content in the dialysis recipient stream (countersolution) is of... [remainder of abstract lost to report-form residue; recoverable keywords: automated analysis; dialysis; glyceryl...]

  18. Cavendish Balance Automation

    NASA Technical Reports Server (NTRS)

    Thompson, Bryan

    2000-01-01

    This is the final report for a project carried out to modify a manual commercial Cavendish Balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf manually operated Cavendish Balance to allow for automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of effects of superconducting materials on the local gravitational field strength, to determine if the strength of gravitational fields can be reduced. A Cavendish Balance was chosen because it is a fairly simple piece of equipment for measuring the gravitational constant, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) all the components necessary to hold and automate the Cavendish Balance in a cryostat were designed; engineering drawings were made of custom parts to be fabricated, and other off-the-shelf parts were procured; (2) software was written in LabView to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing; (3) software was written to take the data collected from the Cavendish Balance and reduce it to give a value for the gravitational constant; (4) the components of the system were assembled and fitted to a cryostat, along with the LabView hardware including the control computer, stepper motor driver, data collection boards, and necessary cabling; and (5) the system was operated for a number of periods, data collected, and reduced to give an average value for the gravitational constant.

  19. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Baxter, Doug

    1988-01-01

    The class of problems that can be effectively compiled by parallelizing compilers is discussed. This is accomplished with the doconsider construct which would allow these compilers to parallelize many problems in which substantial loop-level parallelism is available but cannot be detected by standard compile-time analysis. We describe and experimentally analyze mechanisms used to parallelize the work required for these types of loops. In each of these methods, a new loop structure is produced by modifying the loop to be parallelized. We also present the rules by which these loop transformations may be automated in order that they be included in language compilers. The main application area of the research involves problems in scientific computations and engineering. The workload used in our experiment includes a mixture of real problems as well as synthetically generated inputs. From our extensive tests on the Encore Multimax/320, we have reached the conclusion that for the types of workloads we have investigated, self-execution almost always performs better than pre-scheduling. Further, the improvement in performance that accrues as a result of global topological sorting of indices as opposed to the less expensive local sorting, is not very significant in the case of self-execution.
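
    The two strategies compared above can be sketched as follows: pre-scheduling fixes the iteration-to-worker assignment in advance, while self-execution has idle workers grab the next iteration at run time from a shared counter (next() on itertools.count is effectively atomic under CPython's GIL). The loop body is a trivial stand-in.

        import itertools, threading

        def work(i):                     # stand-in for one loop iteration
            return i * i

        def pre_scheduled(n, workers=4):
            results = [0] * n
            def run(w):                  # static interleaved assignment
                for i in range(w, n, workers):
                    results[i] = work(i)
            ts = [threading.Thread(target=run, args=(w,)) for w in range(workers)]
            for t in ts: t.start()
            for t in ts: t.join()
            return results

        def self_executed(n, workers=4):
            results = [0] * n
            counter = itertools.count()  # shared iteration counter
            def run():
                for i in counter:        # each worker grabs the next index
                    if i >= n:
                        break
                    results[i] = work(i)
            ts = [threading.Thread(target=run) for _ in range(workers)]
            for t in ts: t.start()
            for t in ts: t.join()
            return results

        assert pre_scheduled(100) == self_executed(100)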

  20. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily at the pace needed to meet the challenges posed by the growing volumes of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
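
    A schematic of the data-decomposition strategy, with a stub standing in for the intact search engine and plain lists standing in for the mzXML input and merged pepXML output:

        from multiprocessing import Pool

        def run_search(chunk):           # wraps the unmodified search engine
            return [f"PSM for {spectrum}" for spectrum in chunk]

        def decompose(spectra, n_parts): # split the spectra across workers
            return [spectra[i::n_parts] for i in range(n_parts)]

        if __name__ == "__main__":
            spectra = [f"scan{i}" for i in range(1000)]  # "mzXML" contents
            with Pool(4) as pool:
                partial = pool.map(run_search, decompose(spectra, 4))
            merged = [psm for part in partial for psm in part]  # "pepXML" merge
            print(len(merged), "identifications")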

  1. Automation in biological crystallization.

    PubMed

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  2. Automation in biological crystallization

    PubMed Central

    Shaw Stewart, Patrick; Mueller-Dieckmann, Jochen

    2014-01-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given. PMID:24915074

  3. Autonomy, Automation, and Systems

    NASA Astrophysics Data System (ADS)

    Turner, Philip R.

    1987-02-01

    Aerospace industry interest in autonomy and automation, given fresh impetus by the national goal of establishing a Space Station, is becoming a major item of research and technology development. The promise of new technology arising from research in Artificial Intelligence (AI) has focused much attention on its potential in autonomy and automation. These technologies can improve performance in autonomous control functions that involve planning, scheduling, and fault diagnosis of complex systems. There are, however, many aspects of system and subsystem design in an autonomous system that impact AI applications, but do not directly involve AI technology. Development of a system control architecture, establishment of an operating system within the design, providing command and sensory data collection features appropriate to automated operation, and the use of design analysis tools to support system engineering are specific examples of major design issues. Aspects such as these must also receive attention and technology development support if we are to implement complex autonomous systems within the realistic limitations of mass, power, cost, and available flight-qualified technology that are all-important to a flight project.

  4. Automating existing stations

    SciTech Connect

    Little, J.E.

    1986-09-01

    The task was to automate 20 major compressor stations along ANR Pipeline Co.'s Southeastern and Southwestern pipelines in as many months. Meeting this schedule required standardized hardware and software design. Working with Bristol Babcock Co., ANR came up with an off-the-shelf station automation package suitable for a variety of compressor stations. The project involved 148 engines with 488,880 hp in the 20 stations. ANR Pipeline developed software for these engines and compressors, including horsepower prediction and efficiency. The system places processor "intelligence" at each station and engine to monitor and control operations. The station processor receives commands from the company's gas dispatch center at Detroit and informs dispatchers of alarms, conditions, and decisions it makes. The automation system is controlled by the Detroit center through a central communications network. Operating orders from the center are sent to the station processor, which obeys orders using the most efficient means of operation at the station's disposal. In the event of a malfunction, a control and communications backup system takes over. Commands and information are transmitted directly between the center and the individual compressor stations. Stations receive their orders based on throughput, with suction and discharge pressure overrides. Additionally, a discharge temperature override protects pipeline coatings.

  5. Automation of optical tweezers

    NASA Astrophysics Data System (ADS)

    Hsieh, Tseng-Ming; Chang, Bo-Jui; Hsu, Long

    2000-07-01

    Optical tweezers is a newly developed instrument which makes possible the manipulation of microscopic particles under a microscope. In this paper, we present the automation of an optical tweezers system which consists of a modified optical tweezers, equipped with two motorized actuators to deflect a 1 W argon laser beam, and a computer control system including a joystick. The trapping of a single bead and of a group of lactoacidofilus was shown separately. With the aid of the joystick and two auxiliary cursors superimposed on the real-time image of a trapped bead, we demonstrated the simple and convenient operation of the automated optical tweezers. By steering the joystick and then pressing a button on it, we assign a new location for the trapped bead to move to. The increment of the motion, 0.04 μm for a 20X objective, is negligible. With a fast computer for image processing, the manipulation of the trapped bead is smooth and accurate. The automation of the optical tweezers is also programmable. This technique may be applied to accelerate DNA hybridization in a gene chip. The combination of the modified optical tweezers with the computer control system provides a tool for precise manipulation of microparticles in many scientific fields.

  6. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert system. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  7. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  8. Automating FEA programming

    NASA Technical Reports Server (NTRS)

    Sharma, Naveen

    1992-01-01

    In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
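
    The symbolic-to-numeric pipeline can be illustrated with SymPy for a 1-D linear bar element: the element stiffness matrix is derived symbolically, then a Fortran expression is emitted for the numeric phase. The element choice is ours for illustration; this is not the PIER system itself.

        import sympy as sp

        x, L, E, A = sp.symbols("x L E A", positive=True)
        N = sp.Matrix([1 - x / L, x / L])             # linear shape functions
        B = N.diff(x)                                 # strain-displacement vector
        K = sp.integrate(E * A * B * B.T, (x, 0, L))  # element stiffness matrix
        print(K)             # Matrix([[A*E/L, -A*E/L], [-A*E/L, A*E/L]])
        print(sp.fcode(K[0, 0]))                      # generated Fortran expression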

  9. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.
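
    The underlying linear-algebra idea is easy to check numerically: given a poorly conditioned PSF matrix A, an auxiliary matrix B built from A's own singular value decomposition lifts the small singular values of the summed system A + B. The construction below is the naive target matrix, not the trajectory decomposition itself.

        import numpy as np

        rng = np.random.default_rng(0)
        U, _, Vt = np.linalg.svd(rng.normal(size=(50, 50)))
        s = np.logspace(0, -6, 50)                # singular values spanning 1e6
        A = U @ np.diag(s) @ Vt
        print("cond(A)   =", np.linalg.cond(A))   # ~1e6

        floor = 1e-2                              # lift every mode below this
        B = U @ np.diag(np.maximum(floor - s, 0)) @ Vt
        print("cond(A+B) =", np.linalg.cond(A + B))   # ~1e2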

  10. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
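
    At the algorithm level, the parallel PCA amounts to a distributed accumulation of per-block sufficient statistics followed by a single small eigendecomposition, as in this multiprocessing sketch (which mirrors the cluster implementation only schematically):

        import numpy as np
        from multiprocessing import Pool

        def partial_stats(block):
            # Per-block sufficient statistics: sum, outer-product sum, count.
            return block.sum(axis=0), block.T @ block, block.shape[0]

        if __name__ == "__main__":
            data = np.random.default_rng(1).normal(size=(100_000, 32))
            with Pool(4) as pool:                 # pixels x spectral bands
                parts = pool.map(partial_stats, np.array_split(data, 8))
            total = sum(n for _, _, n in parts)
            mean = sum(s for s, _, _ in parts) / total
            cov = sum(g for _, g, _ in parts) / total - np.outer(mean, mean)
            eigvals, eigvecs = np.linalg.eigh(cov)
            components = eigvecs[:, ::-1]         # principal axes, largest first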

  11. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.

  12. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.
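
    A toy rendering of the operator formulation, with BFS relaxation as the operator applied to a worklist of active nodes: it runs sequentially here, but activities that touch disjoint neighborhoods are exactly the amorphous data-parallelism described above.

        from collections import deque

        graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
        dist = {v: float("inf") for v in graph}

        def operator(node, worklist):
            # One activity: relax the out-edges of a single active node.
            for nb in graph[node]:
                if dist[node] + 1 < dist[nb]:
                    dist[nb] = dist[node] + 1
                    worklist.append(nb)      # a newly activated node

        dist[0] = 0
        worklist = deque([0])
        while worklist:
            operator(worklist.popleft(), worklist)
        print(dist)    # {0: 0, 1: 1, 2: 1, 3: 2}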

  13. AUTOMATED INADVERTENT INTRUDER APPLICATION

    SciTech Connect

    Koffman, L; Patricia Lee, P; Jim Cook, J; Elmer Wilhite, E

    2007-05-29

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and
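
    The pathway arithmetic being automated is algebraically simple: a scenario dose is the sum over nuclides of inventory times a pathway-specific dose conversion factor. All numbers below are invented placeholders, not SRS parameters.

        # (nuclide, scenario) -> dose conversion factor, hypothetical values.
        dcf = {
            ("Tc-99", "agriculture"):   4.0e-3,   # mrem/yr per Ci
            ("Tc-99", "resident"):      1.5e-3,
            ("Tc-99", "post-drilling"): 6.0e-4,
        }
        inventory_ci = {"Tc-99": 2.0}             # disposed inventory, Ci

        def scenario_dose(scenario):
            return sum(inventory_ci[nuc] * factor
                       for (nuc, path), factor in dcf.items() if path == scenario)

        for s in ("agriculture", "resident", "post-drilling"):
            print(s, scenario_dose(s), "mrem/yr")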

  14. Exploration of the functional hierarchy of the basal layer of human epidermis at the single-cell level using parallel clonal microcultures of keratinocytes.

    PubMed

    Fortunel, Nicolas O; Cadio, Emmanuelle; Vaigot, Pierre; Chadli, Loubna; Moratille, Sandra; Bouet, Stéphan; Roméo, Paul-Henri; Martin, Michèle T

    2010-04-01

    The basal layer of human epidermis contains both stem cells and keratinocyte progenitors. Because of this cellular heterogeneity, the development of methods suitable for investigations at a clonal level is dramatically needed. Here, we describe a new method that allows multi-parallel clonal cultures of basal keratinocytes. Immediately after extraction from tissue samples, cells are sorted by flow cytometry based on their high integrin-alpha 6 expression and plated individually in microculture wells. This automated cell deposition process enables large-scale characterization of primary clonogenic capacities. The resulting clonal growth profile provided a precise assessment of basal keratinocyte hierarchy, as the size distribution of 14-day-old clones ranged from abortive to highly proliferative clones containing 1.7 × 10^5 keratinocytes (17.4 cell doublings). Importantly, these 14-day-old primary clones could be used to generate three-dimensional reconstructed epidermis with the progeny of a single cell. In long-term cultures, a fraction of highly proliferative clones could sustain extensive expansion of >100 population doublings over 14 weeks and exhibited long-term epidermis reconstruction potency, thus fulfilling candidate stem cell functional criteria. In summary, parallel clonal microcultures provide a relevant model for single-cell studies on interfollicular keratinocytes, which could be also used in other epithelial models, including hair follicle and cornea. The data obtained using this system support the hierarchical model of basal keratinocyte organization in human interfollicular epidermis.
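
    The doubling figure quoted above follows directly from the clone size, since a single founder cell that reaches N descendants has undergone log2(N) doublings:

        import math
        print(math.log2(1.7e5))    # ~17.4 cell doublings, matching the text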

  15. Automated Proactive Fault Isolation: A Key to Automated Commissioning

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2007-07-31

    In this paper, we present a generic model for automated continuous commissioning and then delve in detail into one of the processes, proactive testing for fault isolation, which is key to automating commissioning. The automated commissioning process uses passive observation-based fault detection and diagnostic techniques, followed by automated proactive testing for fault isolation, automated fault evaluation, and automated reconfiguration of controls together to continuously keep equipment controlled and running as intended. Only when hard failures occur or a physical replacement is required does the process require human intervention, and then sufficient information is provided by the automated commissioning system to target manual maintenance where it is needed. We then focus on fault isolation by presenting detailed logic that can be used to automatically isolate faults in valves, a common component in HVAC systems, as an example of how automated proactive fault isolation can be accomplished. We conclude the paper with a discussion of how this approach to isolating faults can be applied to other common HVAC components and their automated commissioning, and a summary of the key conclusions of the paper.
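
    The flavor of proactive testing can be sketched in a few lines: override the suspect component to each extreme and check whether the measured response follows the command. The hypothetical Python sketch below does this for a heating-coil valve; the function names, settle time, and thresholds are invented for illustration and the paper's actual isolation logic is more detailed.

    ```python
    import time

    # Hypothetical sketch of proactive fault isolation for a heating-coil valve:
    # drive the valve fully closed, then fully open, and compare the measured
    # air temperature rise across the coil against expected bounds.
    # command_valve and read_delta_t are assumed caller-supplied callables.

    def isolate_valve_fault(command_valve, read_delta_t, settle_s=300,
                            closed_max_dt=1.0, open_min_dt=5.0):
        """Return a diagnosis string based on coil delta-T (deg C) responses."""
        command_valve(0.0)              # fully closed
        time.sleep(settle_s)
        dt_closed = read_delta_t()

        command_valve(1.0)              # fully open
        time.sleep(settle_s)
        dt_open = read_delta_t()

        if dt_closed > closed_max_dt and dt_open > open_min_dt:
            return "valve leaking or stuck open"
        if dt_open < open_min_dt and dt_closed <= closed_max_dt:
            return "valve stuck closed, actuator failed, or no hot water supply"
        if dt_closed <= closed_max_dt and dt_open >= open_min_dt:
            return "valve responds correctly; fault lies elsewhere"
        return "inconclusive; repeat test or escalate"
    ```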

  16. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
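
    The core AMR decision, refine where the solution has structure and coarsen where it is smooth, can be illustrated with a simple error indicator. The sketch below flags cells of a 1D grid where the solution gradient is large; the field and threshold are invented for illustration, and production frameworks additionally cluster flagged cells into patches and distribute them in parallel.

    ```python
    import numpy as np

    # Illustrative AMR flagging: mark cells where a local error indicator
    # (here, the gradient magnitude) exceeds a threshold. Field and threshold
    # are arbitrary choices for demonstration.

    x = np.linspace(0.0, 1.0, 201)
    u = np.tanh(50.0 * (x - 0.5))        # solution with a sharp internal layer

    grad = np.abs(np.gradient(u, x))      # cell-wise error indicator
    flagged = grad > 0.1 * grad.max()     # refine where the indicator is large

    print(f"{flagged.sum()} of {x.size} cells flagged for refinement")
    ```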

  17. Parallel Computational Protein Design

    PubMed Central

    Zhou, Yichao; Donald, Bruce R.; Zeng, Jianyang

    2016-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology that aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab [1] to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE [2] and DEEPer [3] to also consider continuous backbone and side-chain flexibility. PMID:27914056
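
    To make the parallelized algorithm concrete, here is a minimal generic A* search in Python. The grid demo is purely illustrative; in gOSPREY the nodes are partial rotamer assignments and the heuristic bounds the best achievable energy of the unassigned residues. This is a textbook sketch, not code from the OSPREY project.

    ```python
    import heapq

    def a_star(start, goal, neighbors, heuristic):
        """Cost of the cheapest start-to-goal path. neighbors(n) yields
        (neighbor, edge_cost); an admissible (non-overestimating) heuristic
        is what guarantees the optimal, GMEC-like answer."""
        open_heap = [(heuristic(start), 0.0, start)]
        best_g = {start: 0.0}
        while open_heap:
            f, g, node = heapq.heappop(open_heap)
            if node == goal:
                return g
            if g > best_g.get(node, float("inf")):
                continue  # stale heap entry
            for nbr, cost in neighbors(node):
                g2 = g + cost
                if g2 < best_g.get(nbr, float("inf")):
                    best_g[nbr] = g2
                    heapq.heappush(open_heap, (g2 + heuristic(nbr), g2, nbr))
        return float("inf")

    # 4-connected 10x10 grid demo with a Manhattan-distance heuristic.
    def nbrs(p):
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= q[0] < 10 and 0 <= q[1] < 10:
                yield q, 1.0

    print(a_star((0, 0), (9, 9), nbrs,
                 lambda p: abs(9 - p[0]) + abs(9 - p[1])))  # 18.0
    ```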

  18. Automation in organizations: Eternal conflict

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  19. Sequential and Parallel Matrix Computations.

    DTIC Science & Technology

    1985-11-01

    Theory" published by the American Math Society. (C) Jointly with A. Sameh of University of Illinois, a parallel algorithm for the single-input pole...an M.Sc. thesis at Northern Illinois University by Ava Chun and, the results were compared with parallel Q-R algorithm of Sameh and Kuck and the

  20. Parallel pseudospectral domain decomposition techniques

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Hirsh, Richard S.

    1988-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

  1. Parallel pseudospectral domain decomposition techniques

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Hirsch, Richard S.

    1989-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.
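
    The two records above describe subdomain solves that couple only through interface boundary values. The sketch below illustrates that idea on the simplest possible case: a 1D Poisson problem split into two overlapping subdomains and solved by a classical alternating Schwarz iteration with second-order finite differences. The paper's pseudospectral interface treatment is more sophisticated; this only shows why each subdomain solve could run on its own processor.

    ```python
    import numpy as np

    # Alternating Schwarz for -u'' = 1 on (0,1), u(0)=u(1)=0 (exact: x(1-x)/2),
    # on two overlapping subdomains. Each subdomain solve sees the other only
    # through the Dirichlet value at its interface node.

    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)

    def solve_subdomain(lo, hi, left_bc, right_bc):
        """Direct FD solve of -u'' = 1 on nodes lo..hi with Dirichlet data."""
        m = hi - lo - 1                  # number of interior unknowns
        A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        b = np.ones(m)
        b[0] += left_bc / h**2
        b[-1] += right_bc / h**2
        return np.linalg.solve(A, b)

    # Overlapping subdomains on node ranges [0, 60] and [40, 100].
    for _ in range(30):
        u[1:60] = solve_subdomain(0, 60, u[0], u[60])        # left solve
        u[41:100] = solve_subdomain(40, 100, u[40], u[100])  # right solve

    exact = 0.5 * x * (1.0 - x)
    print(f"max error after Schwarz iterations: {np.abs(u - exact).max():.2e}")
    ```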

  2. A Parallel Particle Swarm Optimizer

    DTIC Science & Technology

    2003-01-01

    by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based...concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed and its performance and characteristics demonstrated for the biomechanical system identification problem as an example.
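
    A common way to parallelize PSO is to keep the swarm update serial but evaluate the expensive fitness function for all particles concurrently each iteration. The sketch below shows that pattern with Python's multiprocessing pool; the sphere function stands in for the costly biomechanical simulation, and all constants are conventional textbook values rather than the paper's.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def fitness(x):
        """Placeholder for an expensive simulation (sphere function)."""
        return float(np.sum(x ** 2))

    def pso(dim=10, n_particles=32, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.full(n_particles, np.inf)
        gbest, gbest_val = None, np.inf

        with Pool() as pool:
            for _ in range(iters):
                vals = np.array(pool.map(fitness, list(pos)))  # parallel step
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                if vals.min() < gbest_val:
                    gbest_val, gbest = vals.min(), pos[vals.argmin()].copy()
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = pos + vel
        return gbest_val

    if __name__ == "__main__":
        print(f"best fitness: {pso():.3e}")
    ```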

  3. Preparative parallel protein purification (P4).

    PubMed

    Strömberg, Patrik; Rotticci-Mulder, Joke; Björnestedt, Robert; Schmidt, Stefan R

    2005-04-15

    In state-of-the-art drug discovery, it is essential to gain structural information on pharmacologically relevant proteins. Increasing the output of novel protein structures requires improved preparative methods for high-throughput (HT) protein purification. Currently, most HT platforms are limited to small scale, and available technology for increasing throughput at larger scales is scarce. We have adapted a 10-channel parallel flash chromatography system for protein purification applications. The system enables us to perform 10 different purifications in parallel with individual gradients and UV monitoring. Typical protein purification applications were set up. Methods for ion exchange chromatography were developed for different sample proteins and columns. Affinity chromatography was optimized for His-tagged proteins using metal chelating media, and buffer exchange by gel filtration was also tested. The results from the present system were comparable, with respect to resolution and reproducibility, with those from control experiments on an AKTA purifier system. Finally, lysates from 10 E. coli cultures expressing different His-tagged proteins were subjected to a three-step parallel purification procedure, combining the above-mentioned procedures. Nine proteins were successfully purified whereas one failed, probably due to lack of expression.

  4. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
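
    The scalability caveat above can be seen in miniature: each process tabulates joint counts for its share of the records, and the reduction step must merge whole tables whose size grows with the data's cardinality rather than a fixed handful of moments. A small illustrative sketch (the data is invented):

    ```python
    from collections import Counter

    # Each "process" tabulates joint (x, y) counts for its chunk of records;
    # the parallel reduction merges entire tables, whose size depends on the
    # number of distinct categories observed, unlike fixed-size moment sums.

    def contingency(records):
        return Counter((x, y) for x, y in records)

    chunk_a = [("red", "yes"), ("red", "no"), ("blue", "yes")]
    chunk_b = [("red", "yes"), ("blue", "no"), ("blue", "no")]

    table = contingency(chunk_a) + contingency(chunk_b)  # the reduction step
    for (x, y) in sorted(table):
        print(f"{x:>4} {y:>3} {table[(x, y)]}")
    ```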

  5. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  6. Parallel Object-Oriented Framework Optimization

    SciTech Connect

    Quinlan, D

    2001-05-01

    Object-oriented libraries arise naturally from the increasing complexity of developing related scientific applications. The optimization of the use of libraries within scientific applications is one of many high-performance optimizations and is the subject of this paper. This type of optimization can have significant potential because it can either reduce the overhead of calls to a library, specialize the library calls given the context of their use within the application, or use the semantics of the library calls to locally rewrite sections of the application. This type of optimization is only now becoming an active area of research. The optimization of the use of libraries within scientific applications is particularly attractive because it maps to the extensive use of libraries within numerous large existing scientific applications sharing common problem domains. This paper presents an approach toward the optimization of parallel object-oriented libraries. ROSE [1] is a tool for building source-to-source preprocessors; ROSETTA is a tool for defining the grammars used within ROSE. The definition of the grammars directly determines what can be recognized at compile time. ROSETTA permits grammars to be generated automatically which are specific to the identification of abstractions introduced within object-oriented libraries. Thus the semantics of complex abstractions defined outside of the C++ language can be leveraged at compile time to introduce library-specific optimizations. The details of the optimizations performed are not a part of this paper and are up to the library developer to define, using ROSETTA and ROSE to build such an optimizing preprocessor. Within performance optimizations, if they are to be automated, the problems of automatically locating where such optimizations can be done are significant and most often overlooked. Note that a novel part of this work is the degree of automation. Thus library developers can be expected to be able to build their own optimizing preprocessors.
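
    As a flavor of what a source-to-source preprocessor does, the sketch below uses Python's ast module to recognize one hypothetical library idiom and rewrite it locally. ROSE performs this kind of recognition and rewriting on C++ library abstractions; this Python example only illustrates the recognize-and-rewrite mechanism (ast.unparse requires Python 3.9+).

    ```python
    import ast

    class PowRewriter(ast.NodeTransformer):
        """Toy 'library-aware' rewrite: turn math.pow(x, 2) into x * x."""
        def visit_Call(self, node):
            self.generic_visit(node)
            if (isinstance(node.func, ast.Attribute) and node.func.attr == "pow"
                    and isinstance(node.func.value, ast.Name)
                    and node.func.value.id == "math"
                    and len(node.args) == 2
                    and isinstance(node.args[1], ast.Constant)
                    and node.args[1].value == 2):
                return ast.BinOp(left=node.args[0], op=ast.Mult(),
                                 right=node.args[0])
            return node

    tree = ast.parse("y = math.pow(x, 2)")
    tree = ast.fix_missing_locations(PowRewriter().visit(tree))
    print(ast.unparse(tree))  # -> y = x * x
    ```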

  7. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-14

    ... Automated Commercial Environment (ACE) Simplified Entry: Modification of Participant Selection Criteria and... (NCAP) test concerning the simplified entry functionality in the Automated Commercial Environment (ACE...) National Customs Automation Program (NCAP) test concerning Automated Commercial Environment (ACE...

  8. Automated solid models from serial section images.

    PubMed

    Ho, C M; Vannier, M W; Bresina, S J

    1992-05-01

    A new method for creating unambiguous and complete boundary representation solid models with a hybrid polygonal/nonuniform rational B-spline representation was developed and tested using computed tomography scans of the wrist. Polygon surface approximation was applied to a sequence of parallel planar outlines of individual bone elements in the wrist. An automated technique for the transformation of edge contours into solid models was implemented. This was performed using a custom batch file command sequence generator coupled to a commercially available mechanical computer-aided design and engineering software system known as I-DEAS (Structural Dynamics Research Corporation, Milford, OH). This transformation software allows the use of biomedical scan slice data with a solid modeler.

  9. Phaser.MRage: automated molecular replacement

    SciTech Connect

    Bunkóczi, Gábor; McCoy, Airlie J.; Oeffner, Robert D.; Read, Randy J.

    2013-11-01

    The functionality of the molecular-replacement pipeline phaser.MRage is introduced and illustrated with examples. Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement.

  10. Automation of Data Traffic Control on DSM Architecture

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry

    2001-01-01

    The design of distributed shared memory (DSM) computers liberates users from the duty to distribute data across processors and allows for the incremental development of parallel programs using, for example, OpenMP or Java threads. DSM architecture greatly simplifies the development of parallel programs having good performance on a few processors. However, achieving good program scalability on DSM computers requires that the user understand data flow in the application and use various techniques to avoid data traffic congestion. In this paper we discuss a number of such techniques, including data blocking, data placement, data transposition and page size control, and evaluate their efficiency on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks. We also present a tool which automates the detection of constructs causing data congestion in Fortran array-oriented codes and advises the user on code transformations for improving data traffic in the application.
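
    Of the techniques listed, data blocking is the easiest to show in a few lines: traverse a large array in tiles so each block of memory is fully reused before moving on, which on a DSM machine also keeps accesses on pages placed near the processor. A NumPy sketch of a blocked transpose (sizes and tile width are arbitrary choices):

    ```python
    import numpy as np

    # Blocked (tiled) transpose: each iteration touches one contiguous tile of
    # the source and one of the destination, improving locality over striding
    # across entire rows.

    def blocked_transpose(a, tile=64):
        n, m = a.shape
        out = np.empty((m, n), dtype=a.dtype)
        for i in range(0, n, tile):
            for j in range(0, m, tile):
                out[j:j + tile, i:i + tile] = a[i:i + tile, j:j + tile].T
        return out

    a = np.arange(1024 * 1024, dtype=np.float64).reshape(1024, 1024)
    assert np.array_equal(blocked_transpose(a), a.T)
    ```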

  11. Parallel performance of a preconditioned CG solver for unstructured finite element applications

    SciTech Connect

    Shadid, J.N.; Hutchinson, S.A.; Moffat, H.K.

    1994-12-31

    A parallel unstructured finite element (FE) implementation designed for message passing MIMD machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024 processor nCUBE 2 hypercube. Results indicate over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
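
    For reference, the unpreconditioned conjugate gradient iteration at the heart of such a solver is compact; in the parallel unstructured-mesh setting, the matrix-vector product requires interface (ghost) data exchange across the mesh partition and the two inner products require global reductions, which are exactly the communication kernels the abstract mentions. A serial sketch on a small SPD test matrix:

    ```python
    import numpy as np

    def cg(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p                 # parallel: needs interface exchange
            alpha = rs / (p @ Ap)      # parallel: global reduction
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r             # parallel: global reduction
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    n = 100  # 1D Laplacian as a simple SPD test matrix
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    b = np.ones(n)
    x = cg(A, b)
    print(f"residual: {np.linalg.norm(b - A @ x):.2e}")
    ```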

  12. World-wide distribution automation systems

    SciTech Connect

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include distribution management systems; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; and computer modeling of distribution systems.

  13. Automation for optics manufacturing

    NASA Astrophysics Data System (ADS)

    Pollicove, Harvey M.; Moore, Duncan T.

    1990-11-01

    The optics industry has not followed the lead of the machining and electronics industries in applying advances in computer-aided engineering (CAE), computer-assisted manufacturing (CAM), automation, or quality management techniques. Automation based on computer-integrated manufacturing (CIM) and flexible machining systems (FMS) has been widely implemented in these industries. Optics continues to rely on standalone equipment that preserves the highly skilled, labor-intensive optical fabrication systems developed in the 1940s. This paper describes development initiatives at the Center for Optics Manufacturing that will create computer-integrated manufacturing technology and support processes for the optical industry.

  14. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    2000-01-01

    An automated propellant blending apparatus and method that uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation is discussed. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.
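
    The control idea in this abstract, metering countersolvent against an empirically measured precipitation profile, can be sketched in a few lines: interpolate the profile to locate the cloud point, then throttle the feed rate inside a guard band around it. Every number and name below is a hypothetical placeholder, not data from the patent.

    ```python
    import numpy as np

    # Hypothetical empirical profile: fraction of binder precipitated versus
    # countersolvent:solvent ratio (values invented for illustration).
    ratio = np.array([0.0, 0.2, 0.4, 0.5, 0.6, 0.8])
    precipitated = np.array([0.0, 0.005, 0.02, 0.10, 0.55, 0.95])

    # Invert the profile to find the ratio at the onset of precipitation (~5%).
    cloud_point = np.interp(0.05, precipitated, ratio)

    def addition_rate(current_ratio, fast=10.0, slow=0.5, guard=0.1):
        """Countersolvent feed rate (mL/min): fast far below the cloud point,
        throttled to a slow metered rate within a guard band around it."""
        return fast if current_ratio < cloud_point - guard else slow

    print(f"cloud point at ratio ~{cloud_point:.2f}")
    print(addition_rate(0.2), addition_rate(0.45))
    ```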

  15. [Automated anesthesia record system].

    PubMed

    Zhu, Tao; Liu, Jin

    2005-12-01

    Based on a client/server architecture, an automated anesthesia record system running under the Windows operating system on a network has been developed and programmed with Microsoft Visual C++ 6.0, Visual Basic 6.0 and SQL Server. The system manages the patient's information throughout anesthesia. It can collect and integrate data from several kinds of medical equipment, such as monitors, infusion pumps and anesthesia machines, automatically and in real time. The system then generates the anesthesia sheets automatically. The record system makes the anesthesia record more accurate and complete and can raise the anesthesiologist's working efficiency.

  16. Automated fiber pigtailing machine

    DOEpatents

    Strand, Oliver T.; Lowry, Mark E.

    1999-01-01

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems.

  17. Automated fiber pigtailing machine

    DOEpatents

    Strand, O.T.; Lowry, M.E.

    1999-01-05

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems. 26 figs.

  18. Automated wire preparation system

    NASA Astrophysics Data System (ADS)

    McCulley, Deborah J.

    The first step toward an automated wire harness facility for the aerospace industry has been taken by implementing the Wire Vektor 2000 into the wire harness preparation area. An overview of the Wire Vektor 2000 is given, including the facilities for wire cutting, marking, and transporting, for wire end processing, and for system control. Production integration in the Wire Vektor 2000 system is addressed, considering the hardware/software debug system and the system throughput. The manufacturing changes that have to be made in implementing the Wire Vektor 2000 are discussed.

  19. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    1999-01-01

    An automated propellant blending apparatus and method uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.

  20. Automated chair-training of rhesus macaques.

    PubMed

    Ponce, C R; Genecin, M P; Perez-Melara, G; Livingstone, M S

    2016-04-01

    Neuroscience research on non-human primates usually requires the animals to sit in a chair. To do this, monkeys are typically fitted with collars and trained to enter the chairs using a pole, leash, or jump cage. Animals may initially show resistance and risk injury. We have developed an automated chair-training method that minimizes restraints to ease the animals into their chairs. We developed a method to automatically train animals to enter a primate chair and stick out their heads for neck-plate placement. To do this, we fitted the chairs with Arduino microcontrollers coupled to a water-reward system and touch and proximity sensors. We found that the animals responded well to the chair, partially entering the chair within hours, sitting inside the chair within days and allowing us to manually introduce a door and neck plate, all within 14-21 sessions. Although each session could last many hours, automation meant that actual training person-hours could be as little as half an hour per day. The biggest advantage was that animals showed little resistance to entering the chair, compared to monkeys trained by leash pulling. This automated chair-training method can take longer than the standard collar-and-leash approach, but multiple macaques can be trained in parallel with fewer person-hours. It is also a promising method for animal-use refinement and, in our case, it was the only effective training approach for an animal suffering from a behavioral pathology.