Sample records for automated test generation

  1. Automated Test Case Generation for an Autopilot Requirement Prototype

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
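
    Below is a minimal, dependency-free sketch of the test-generation idea in this record, not ADEPT's implementation: each branch of a toy mode-logic component contributes a path condition, a brute-force search over a small input domain stands in for the constraint solver to find one concrete input per feasible path, and the design's own output is recorded as the test oracle. The component, its modes, and its guards are invented for illustration.

    ```python
    # Symbolic-execution-style test generation, sketched: one test per
    # path condition, with the designed behavior recorded as the oracle.

    def autopilot_mode(altitude_error, capture_armed):
        """Toy system design: the 'model' whose behavior defines the oracle."""
        if capture_armed and abs(altitude_error) < 100:
            return "CAPTURE"
        if capture_armed:
            return "ARMED"
        return "MANUAL"

    # Explicit path conditions, one predicate per branch of the design.
    PATHS = [
        ("capture path", lambda e, a: a and abs(e) < 100),
        ("armed path",   lambda e, a: a and not abs(e) < 100),
        ("manual path",  lambda e, a: not a),
    ]

    def generate_tests():
        """One (input, oracle) pair per feasible path, found by searching a
        small input domain (a stand-in for a constraint solver)."""
        tests = []
        for name, condition in PATHS:
            for error in range(-500, 501, 50):
                for armed in (True, False):
                    if condition(error, armed):
                        tests.append((name, (error, armed),
                                      autopilot_mode(error, armed)))
                        break
                else:
                    continue
                break
        return tests

    for name, inputs, expected in generate_tests():
        print(f"{name}: inputs={inputs} expected={expected}")
    ```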

  2. Automatic Generation of Test Oracles - From Pilot Studies to Application

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Smith, Ben

    1998-01-01

    There is a trend towards the increased use of automation in V&V. Automation can yield savings in time and effort. For critical systems, where thorough V&V is required, these savings can be substantial. We describe a progression from pilot studies to development and use of V&V automation. We used pilot studies to ascertain opportunities for, and suitability of, automating various analyses whose results would contribute to V&V. These studies culminated in the development of an automatic generator of automated test oracles. This was then applied and extended in the course of testing an AI planning system that is a key component of an autonomous spacecraft.
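
    As a hedged sketch of what an automatic generator of test oracles produces, the snippet below compiles a list of declarative postconditions into a single executable check that is run against every test execution. The spec format, the toy planner, and all names are invented; the paper's generator targeted an AI planning system with much richer properties.

    ```python
    # From declarative spec to executable oracle, in miniature.

    def make_oracle(postconditions):
        """Build one oracle function from named predicates over (input, output)."""
        def oracle(task, plan):
            failures = [name for name, pred in postconditions
                        if not pred(task, plan)]
            return failures  # empty list means the test passed
        return oracle

    # Declarative spec: every goal is achieved, and no step repeats.
    SPEC = [
        ("all goals covered", lambda task, plan: set(task) <= set(plan)),
        ("no duplicate steps", lambda task, plan: len(plan) == len(set(plan))),
    ]

    def toy_planner(goals):
        return list(goals)  # trivially 'plans' each goal once

    oracle = make_oracle(SPEC)
    task = ["deploy_antenna", "take_image"]
    print(oracle(task, toy_planner(task)))  # [] -> pass
    print(oracle(task, ["take_image"]))     # ['all goals covered'] -> fail
    ```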

  3. Automated Test-Form Generation

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
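
    A minimal sketch of mixed-integer-programming test assembly in the spirit of this abstract: select items from a small bank to maximize test information subject to content specifications. The PuLP modeler and the fabricated item bank are my assumptions, not the authors' software.

    ```python
    # ATA as a tiny binary program: maximize information, meet constraints.
    import pulp

    # (information at the cutoff ability, content area) for a toy 8-item bank
    bank = [(0.52, "algebra"), (0.31, "algebra"), (0.44, "geometry"),
            (0.60, "geometry"), (0.25, "algebra"), (0.48, "geometry"),
            (0.39, "algebra"), (0.55, "geometry")]

    prob = pulp.LpProblem("test_assembly", pulp.LpMaximize)
    x = [pulp.LpVariable(f"item_{i}", cat="Binary") for i in range(len(bank))]

    # Objective: maximize total test information of the selected items.
    prob += pulp.lpSum(info * x[i] for i, (info, _) in enumerate(bank))
    # Specifications: 4 items total, exactly 2 per content area.
    prob += pulp.lpSum(x) == 4
    for area in ("algebra", "geometry"):
        prob += pulp.lpSum(x[i] for i, (_, a) in enumerate(bank) if a == area) == 2

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("selected items:", [i for i in range(len(bank)) if x[i].value() == 1])
    ```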

  4. Integrating Test-Form Formatting into Automated Test Assembly

    ERIC Educational Resources Information Center

    Diao, Qi; van der Linden, Wim J.

    2013-01-01

    Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…

  5. Experiments with Test Case Generation and Runtime Analysis

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Drusinsky, Doron; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareanu, Corina; Rosu, Grigore; Visser, Willem; Koga, Dennis (Technical Monitor)

    2003-01-01

    Software testing is typically an ad hoc process where human testers manually write many test inputs and expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports preliminary results on an approach to further automate this process. The approach consists of combining automated test case generation based on systematically exploring the program's input domain, with runtime analysis, where execution traces are monitored and verified against temporal logic specifications, or analyzed using advanced algorithms for detecting concurrency errors such as data races and deadlocks. The approach suggests generating specifications dynamically per input instance rather than statically once-and-for-all. The paper describes experiments with variants of this approach in the context of two examples, a planetary rover controller and a spacecraft fault protection system.
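
    The runtime-analysis half of the approach can be sketched as a trace monitor. The property checked here ("every acquire is eventually followed by a release by the same task") and the trace format are invented; the paper's monitors handle temporal logic specifications and concurrency analyses such as data race and deadlock detection.

    ```python
    # Checking an execution trace against a temporal-style property.

    def monitor(trace):
        """Return violations of: every acquire is eventually released."""
        held = {}          # task -> position of last unmatched acquire
        violations = []
        for i, (event, task) in enumerate(trace):
            if event == "acquire":
                if task in held:
                    violations.append((i, f"{task} re-acquired without release"))
                held[task] = i
            elif event == "release":
                if task not in held:
                    violations.append((i, f"{task} released without acquire"))
                held.pop(task, None)
        violations.extend(
            (pos, f"{task} never released") for task, pos in held.items())
        return violations

    trace = [("acquire", "rover"), ("release", "rover"),
             ("acquire", "arm")]                      # arm never releases
    for pos, msg in monitor(trace):
        print(f"event {pos}: {msg}")
    ```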

  6. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is in the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on object-oriented language (Flavors).

  7. Universal Verification Methodology Based Register Test Automation Flow.

    PubMed

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in a shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate models into a test-bench environment because this requires knowledge of SystemVerilog and UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe register specifications in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. We also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use register models without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time when verifying the functionality of registers.
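
    The first step of the proposed flow, translating a spreadsheet-style register template into an IP-XACT description, can be sketched as below. The CSV column layout and the stripped-down XML subset are my assumptions; real IP-XACT output needs the full schema and namespaces, and the downstream register-model generation is left to the commercial tools the abstract mentions.

    ```python
    # Spreadsheet (CSV) register template -> IP-XACT-style XML, in miniature.
    import csv, io
    import xml.etree.ElementTree as ET

    TEMPLATE = ("name,offset,size,access,reset\n"
                "CTRL,0x00,32,read-write,0x00000000\n"
                "STATUS,0x04,32,read-only,0x00000001\n")

    def csv_to_ipxact(text, block_name="regblock"):
        """Translate CSV register rows into a minimal IP-XACT-like tree."""
        root = ET.Element("addressBlock")
        ET.SubElement(root, "name").text = block_name
        for row in csv.DictReader(io.StringIO(text)):
            reg = ET.SubElement(root, "register")
            ET.SubElement(reg, "name").text = row["name"]
            ET.SubElement(reg, "addressOffset").text = row["offset"]
            ET.SubElement(reg, "size").text = row["size"]
            ET.SubElement(reg, "access").text = row["access"]
            ET.SubElement(ET.SubElement(reg, "reset"), "value").text = row["reset"]
        return ET.tostring(root, encoding="unicode")

    print(csv_to_ipxact(TEMPLATE))
    ```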

  8. Generating Test Templates via Automated Theorem Proving

    NASA Technical Reports Server (NTRS)

    Kancherla, Mani Prasad

    1997-01-01

    Testing can be used during the software development process to maintain fidelity between evolving specifications, program designs, and code implementations. We use a form of specification-based testing that employs an automated theorem prover to generate test templates. A similar approach was developed using a model checker on state-intensive systems. This method applies to systems with functional rather than state-based behaviors. This approach allows for the use of incomplete specifications to aid in generation of tests for potential failure cases. We illustrate the technique on the canonical triangle testing problem and discuss its use in the analysis of a spacecraft scheduling system.
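
    The canonical triangle problem partitions naturally into the behavior classes that test templates capture. In the hedged sketch below, each template is a predicate over (a, b, c), including a failure class, and a brute-force search finds one concrete witness per template as a stand-in for the theorem prover's model generation.

    ```python
    # Specification-based test templates for triangle classification.

    def is_tri(a, b, c):
        """The spec's validity precondition (triangle inequality)."""
        return a + b > c and b + c > a and a + c > b

    TEMPLATES = {
        "equilateral":    lambda a, b, c: is_tri(a, b, c) and a == b == c,
        "isosceles":      lambda a, b, c: is_tri(a, b, c) and len({a, b, c}) == 2,
        "scalene":        lambda a, b, c: is_tri(a, b, c) and len({a, b, c}) == 3,
        "not a triangle": lambda a, b, c: not is_tri(a, b, c),  # failure case
    }

    def witness(pred, domain=range(1, 6)):
        """Find one concrete input satisfying a template's predicate."""
        for a in domain:
            for b in domain:
                for c in domain:
                    if pred(a, b, c):
                        return (a, b, c)
        return None  # template infeasible over this domain

    for name, pred in TEMPLATES.items():
        print(f"{name}: test input {witness(pred)}")
    ```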

  9. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
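
    A compact sketch of the report's two-part split, on a toy model: the same generated inputs drive validation testing (does the model satisfy an executable high-level requirement?) and conformance testing (does code generated from the model behave identically to it?). The model, the requirement, and the seeded bug are all invented here.

    ```python
    # Requirements-based (validation) and model-based (conformance) testing.

    def model(speed, limit):
        """Executable model: brake command is on iff speed exceeds the limit."""
        return speed > limit

    def generated_code(speed, limit):
        """Stand-in for code generated from the model (with a seeded bug)."""
        return speed >= limit  # off-by-one vs. the model at speed == limit

    def requirement_holds(speed, limit, brake_on):
        """High-level requirement: brake engages whenever speed > limit."""
        return (not speed > limit) or brake_on

    tests = [(s, 50) for s in (0, 49, 50, 51, 120)]  # generated test inputs

    for speed, limit in tests:
        m_out = model(speed, limit)
        # Validation testing: model against the requirement.
        assert requirement_holds(speed, limit, m_out), "model violates requirement"
        # Conformance testing: generated code against the model, per input.
        c_out = generated_code(speed, limit)
        if c_out != m_out:
            print(f"conformance failure at speed={speed}: code={c_out}, model={m_out}")
    ```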

  10. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  11. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  12. Spaceport Command and Control System Automated Testing

    NASA Technical Reports Server (NTRS)

    Stein, Meriel

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This large system requires high quality testing that will properly measure the capabilities of the system. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.

  13. Spaceport Command and Control System Automation Testing

    NASA Technical Reports Server (NTRS)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This large system requires high quality testing that will properly measure the capabilities of the system. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.

  14. Apparatus for automated testing of biological specimens

    DOEpatents

    Layne, Scott P.; Beugelsdijk, Tony J.

    1999-01-01

    An apparatus for performing automated testing of infectious biological specimens is disclosed. The apparatus comprises a process controller for translating user commands into test instrument suite commands, and a test instrument suite comprising a means to treat the specimen to manifest an observable result, and a detector for measuring the observable result to generate specimen test results.

  15. Advanced E-O test capability for Army Next-Generation Automated Test System (NGATS)

    NASA Astrophysics Data System (ADS)

    Errea, S.; Grigor, J.; King, D. F.; Matis, G.; McHugh, S.; McKechnie, J.; Nehring, B.

    2015-05-01

    The Future E-O (FEO) program was established to develop a flexible, modular, automated test capability as part of the Next Generation Automatic Test System (NGATS) program to support the test and diagnostic needs of currently fielded U.S. Army electro-optical (E-O) devices, as well as being expandable to address the requirements of future Navy, Marine Corps and Air Force E-O systems. Santa Barbara Infrared (SBIR) has designed, fabricated, and delivered three (3) prototype FEO systems for engineering and logistics evaluation prior to anticipated full-scale production beginning in 2016. In addition to presenting a detailed overview of the FEO system hardware design, features and testing capabilities, the integration of SBIR's EO-IR sensor and laser test software package, IRWindows 4™, into FEO to automate the test execution, data collection and analysis, archiving and reporting of results is also described.

  16. Test Generator for MATLAB Simulations

    NASA Technical Reports Server (NTRS)

    Henry, Joel

    2011-01-01

    MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.

  17. 40 CFR 53.22 - Generation of test atmospheres.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods for SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2 to subpart B of part 53 specifies preferred methods for generating test atmospheres and suggested...

  18. 40 CFR 53.22 - Generation of test atmospheres.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods for SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2 to subpart B of part 53 specifies preferred methods for generating test atmospheres and suggested...

  19. 40 CFR 53.22 - Generation of test atmospheres.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods for SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2 to subpart B of part 53 specifies preferred methods for generating test atmospheres and suggested...

  20. 40 CFR 53.22 - Generation of test atmospheres.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... test concentration shall be verified. (b) The test atmosphere delivery system shall be designed and... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2...

  21. 40 CFR 53.22 - Generation of test atmospheres.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... test concentration shall be verified. (b) The test atmosphere delivery system shall be designed and... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2...

  22. Automated spot defect characterization in a field portable night vision goggle test set

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume

    2018-05-01

    This paper discusses a new capability developed for and results from a field portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a Knife Edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I2 tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect, which is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results from across several NVGs will be presented.
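
    A simplified sketch of the machine-vision step: threshold the dark spots, label connected components, and emit the size/location table the operator once filled in by hand. numpy/scipy are my tooling choice, and the real test set's zone rings and size classes are more involved than this.

    ```python
    # Automated spot-defect table from a (synthetic) tube-output image.
    import numpy as np
    from scipy import ndimage

    def spot_defect_table(image, dark_threshold=0.5):
        """image: 2-D float array of normalized I2-tube output luminance."""
        labels, count = ndimage.label(image < dark_threshold)
        table = []
        for idx in range(1, count + 1):
            blob = labels == idx
            cy, cx = ndimage.center_of_mass(blob)
            r = np.hypot(cy - image.shape[0] / 2, cx - image.shape[1] / 2)
            table.append({"spot": idx,
                          "area_px": int(blob.sum()),
                          "radius_from_center_px": round(float(r), 1)})
        return table

    img = np.ones((64, 64))
    img[10:13, 20:23] = 0.0   # synthetic 3x3 dark defect
    print(spot_defect_table(img))
    ```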

  23. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  24. A compendium of controlled diffusion blades generated by an automated inverse design procedure

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1989-01-01

    A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.

  25. Developments to Increase the Performance, Operational Versatility and Automation of a Lunar Surface Manipulation System

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Jones, Thomas C.; Doggett, William R.; Roithmayr, Carlos M.; King, Bruce D.; Mikulas, Martin M.

    2009-01-01

    The objective of this paper is to describe and summarize the results of the development efforts for the Lunar Surface Manipulation System (LSMS) with respect to increasing the performance, operational versatility, and automation. Three primary areas of development are covered: the expansion of the operational envelope and versatility of the current LSMS test-bed, the design of a second generation LSMS, and the development of automation and remote control capability. The first generation LSMS, which has been designed, built, and tested both in lab and field settings, is shown to have increased range of motion and operational versatility. Features such as fork lift mode, side grappling of payloads, digging and positioning of lunar regolith, and a variety of special end effectors are described. LSMS operational viability depends on being able to reposition its base from an initial position on the lander to a mobility chassis or fixed locations around the lunar outpost. Preliminary concepts are presented for the second generation LSMS design, which will perform this self-offload capability. Incorporating design improvements, the second generation will have longer reach and three times the payload capability, yet it will have approximately equivalent mass to the first generation. Lastly, this paper covers improvements being made to the control system of the LSMS test-bed, which is currently operated using joint velocity control with visual cues. These improvements include joint angle sensors, inverse kinematics, and automated controls.
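
    The control-system upgrades mention inverse kinematics; as a minimal worked example, here is the standard closed-form solution for a planar two-link arm. The LSMS has different geometry and more joints, so the link lengths and the planar assumption are purely illustrative.

    ```python
    # Closed-form inverse kinematics for a planar two-link arm.
    import math

    def two_link_ik(x, y, l1=3.0, l2=2.0):
        """Joint angles placing the arm tip at (x, y), elbow-down solution."""
        cos_elbow = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if not -1.0 <= cos_elbow <= 1.0:
            raise ValueError("target out of reach")
        elbow = math.acos(cos_elbow)
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    s, e = two_link_ik(4.0, 1.0)
    # Forward-kinematics check: should land back on (4.0, 1.0).
    x = 3.0 * math.cos(s) + 2.0 * math.cos(s + e)
    y = 3.0 * math.sin(s) + 2.0 * math.sin(s + e)
    print(round(x, 6), round(y, 6))
    ```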

  26. Automated ILA design for synchronous sequential circuits

    NASA Technical Reports Server (NTRS)

    Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.

    1991-01-01

    An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. This technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1 micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.

  27. Instructional Topics in Educational Measurement (ITEMS) Module: Using Automated Processes to Generate Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2013-01-01

    Changes to the design and development of our educational assessments are resulting in the unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…

  28. Software for Automated Testing of Mission-Control Displays

    NASA Technical Reports Server (NTRS)

    O'Hagan, Brian

    2004-01-01

    MCC Display Cert Tool is a set of software tools for automated testing of computer-terminal displays in spacecraft mission-control centers, including those of the space shuttle and the International Space Station. This software makes it possible to perform tests that are more thorough, take less time, and are less likely to lead to erroneous results, relative to tests performed manually. This software enables comparison of two sets of displays to report command and telemetry differences, generates test scripts for verifying telemetry and commands, and generates a documentary record containing display information, including version and corrective-maintenance data. At the time of reporting the information for this article, work was continuing to add a capability for validation of display parameters against a reconfiguration file.

  29. Implementation of and experiences with new automation

    PubMed Central

    Mahmud, Ifte; Kim, David

    2000-01-01

    In an environment where cost, timeliness, and quality drive the business, it is essential to look for answers in technology where these challenges can be met. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However, the workload remained constant or in some cases actually increased. So even with a reduction in laboratory personnel, we were challenged internally and from the headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies indicated the Suffern site to be the choice manufacturing site above other facilities. This is attributed to the Suffern facility employees' commitment to reduce cycle time, improve efficiency, and maintain a high level of regulatory compliance. One of the stronger contributing factors was automation technology in the laboratories, and this technology will continue to help the site's status in the future. The Automation Group was originally formed about 2 years ago to meet the demands of high quality assurance testing throughput needs and to bring our testing group up to standard with the industry. Automation began with only two people in the group and now we have three people who are the next generation automation scientists. Even with such a small staff, we have made great strides in laboratory automation as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second generation automation group came from the laboratory and without much automation experience. However, with the involvement from the users at ‘get-go’, we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, and then Zymark TPWII followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore the possibilities of improving the testing methodologies so that the chemists will be less burdened with repetitive and mundane daily tasks and be more focused on bringing quality into our products. PMID:18924695

  30. Implementation of and experiences with new automation.

    PubMed

    Mahmud, I; Kim, D

    2000-01-01

    In an environment where cost, timeliness, and quality drive the business, it is essential to look for answers in technology where these challenges can be met. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However, the workload remained constant or in some cases actually increased. So even with a reduction in laboratory personnel, we were challenged internally and from the headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies indicated the Suffern site to be the choice manufacturing site above other facilities. This is attributed to the Suffern facility employees' commitment to reduce cycle time, improve efficiency, and maintain a high level of regulatory compliance. One of the stronger contributing factors was automation technology in the laboratories, and this technology will continue to help the site's status in the future. The Automation Group was originally formed about 2 years ago to meet the demands of high quality assurance testing throughput needs and to bring our testing group up to standard with the industry. Automation began with only two people in the group and now we have three people who are the next generation automation scientists. Even with such a small staff, we have made great strides in laboratory automation as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second generation automation group came from the laboratory and without much automation experience. However, with the involvement from the users at 'get-go', we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, and then Zymark TPWII followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore the possibilities of improving the testing methodologies so that the chemists will be less burdened with repetitive and mundane daily tasks and be more focused on bringing quality into our products.

  31. Multiphasic Health Testing in the Clinic Setting

    PubMed Central

    LaDou, Joseph

    1971-01-01

    The economy of automated multiphasic health testing (AMHT) activities patterned after the high-volume Kaiser program can be realized in low-volume settings. AMHT units have been operated at daily volumes of 20 patients in three separate clinical environments. These programs have displayed economics entirely compatible with cost figures published by the established high-volume centers. This experience, plus the expanding capability of small, general purpose, digital computers (minicomputers), indicates that a group of six or more physicians generating 20 laboratory appraisals per day can economically justify a completely automated multiphasic health testing facility. This system would reside in the clinic or hospital where it is used and can be configured to do analyses such as electrocardiography and generate laboratory reports, and communicate with large computer systems in university medical centers. Experience indicates that the most effective means of implementing these benefits of automation is to make them directly available to the medical community with the physician playing the central role. Economic justification of a dedicated computer through low-volume health testing then allows, as a side benefit, automation of administrative as well as other diagnostic activities—for example, patient billing, computer-aided diagnosis, and computer-aided therapeutics. PMID:4935771

  32. Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.

    2000-01-01

    The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.

  33. Automated Test Requirement Document Generation

    DTIC Science & Technology

    1987-11-01

    "DIAGNOSTICS BASED ON THE PRINCIPLES OF ARTIFICIAL INTELLIGENCE", 1984 International Test Conference, 01Oct84 ... GLOSSARY OF ACRONYMS: AFSATCOM Air Force Satellite Communication; AI Artificial Intelligence; ASIC Application Specific ... Built-In-Test Equipment (BITE) and AI (Artificial Intelligence) - Expert Systems - need to be fully applied before a completely automated process can be ...

  34. Contamination with HIV antibody may be responsible for false positive results in specimens tested on automated platforms running HIV 4th generation assays in a region of high HIV prevalence.

    PubMed

    Hardie, Diana Ruth; Korsman, Stephen N; Hsiao, Nei-Yuan; Morobadi, Molefi Daniel; Vawda, Sabeehah; Goedhals, Dominique

    2017-01-01

    In South Africa where the prevalence of HIV infection is very high, 4th generation HIV antibody/p24 antigen combo immunoassays are the tests of choice for laboratory based screening. Testing is usually performed in clinical pathology laboratories on automated analysers. To investigate the cause of false positive results on 4th generation HIV testing platforms in public sector laboratories, the performance of two automated platforms was compared in a clinical pathology setting, firstly on routine diagnostic specimens and secondly on known sero-negative samples. Firstly, 1181 routine diagnostic specimens were sequentially tested on Siemens and Roche automated 4th generation platforms. HIV viral load, western blot and follow up testing were used to determine the true status of inconclusive specimens. Subsequently, known HIV seronegative samples from a single donor were repeatedly tested on both platforms and an analyser was tested for surface contamination with HIV positive serum to identify how suspected specimen contamination could be occurring. Serial testing of diagnostic specimens yielded 163 weakly positive or discordant results. Only 3 of 163 were conclusively shown to indicate true HIV infection. Specimen contamination with HIV antibody was suspected, based on the following evidence: the proportion of positive specimens increased on repeated passage through the analysers; viral loads were low or undetectable and western blots negative or indeterminate on problem specimens; screen negative, 2nd test positive specimens tested positive when reanalysed on the screening assay; follow up specimens (where available) were negative. Similarly, an increasing number of known negative specimens became (repeatedly) sero-positive on serial passage through one of the analysers. Internal and external analyser surfaces were contaminated with HIV serum, evidence that sample splashes occur during testing. Due to the extreme sensitivity of these assays, contamination with minute amounts of HIV antibody can cause a negative sample to test positive. Better contamination control measures are needed on analysers used in clinical pathology environments, especially in regions where HIV sero-prevalence is high.

  35. Spaceport Command and Control System Software Development

    NASA Technical Reports Server (NTRS)

    Glasser, Abraham

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This large system requires a large amount of intensive testing that will properly measure the capabilities of the system. Automating the test procedures would save the project money from human labor costs, as well as making the testing process more efficient. Therefore, the Exploration Systems Division (formerly the Electrical Engineering Division) at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.

  36. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    O'Donnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  37. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    O'Donnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  38. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.

  39. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1987-01-01

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

  40. Development of an automated fuzing station for the future armored resupply vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chesser, J.B.; Jansen, J.F.; Lloyd, P.D.

    1995-03-01

    The US Army is developing the Advanced Field Artillery System (AFAS), a next generation armored howitzer. The Future Armored Resupply Vehicle (FARV) will be its companion ammunition resupply vehicle. The FARV will automate the supply of ammunition and fuel to the AFAS, which will increase capabilities over the current system. One of the functions being considered for automation is ammunition processing. Oak Ridge National Laboratory is developing equipment to demonstrate automated ammunition processing. One of the key operations to be automated is fuzing. The projectiles are initially unfuzed, and a fuze must be inserted and threaded into the projectile as part of the processing. A constraint on the design solution is that the ammunition cannot be modified to simplify automation. The problem was analyzed to determine the alignment requirements. Using the results of the analysis, ORNL designed, built, and tested a test stand to verify the selected design solution.

  41. Automated synthesis, insertion and detection of polyps for CT colonography

    NASA Astrophysics Data System (ADS)

    Sezille, Nicolas; Sadleir, Robert J. T.; Whelan, Paul F.

    2003-03-01

    CT Colonography (CTC) is a new non-invasive colon imaging technique which has the potential to replace conventional colonoscopy for colorectal cancer screening. A novel system which facilitates automated detection of colorectal polyps at CTC is introduced. As exhaustive testing of such a system using real patient data is not feasible, more complete testing is achieved through synthesis of artificial polyps and insertion into real datasets. The polyp insertion is semi-automatic: candidate points are manually selected using a custom GUI, suitable points are determined automatically from an analysis of the local neighborhood surrounding each of the candidate points. Local density and orientation information are used to generate polyps based on an elliptical model. Anomalies are identified from the modified dataset by analyzing the axial images. Detected anomalies are classified as potential polyps or natural features using 3D morphological techniques. The final results are flagged for review. The system was evaluated using 15 scenarios. The sensitivity of the system was found to be 65% with 34% false positive detections. Automated diagnosis at CTC is possible and thorough testing is facilitated by augmenting real patient data with computer generated polyps. Ultimately, automated diagnosis will enhance standard CTC and increase performance.

  42. Description of the testing process for the Automated Residential Energy Standard (ARES) in support of proposed interim energy conservation voluntary performance standards for new non-federal residential buildings: Volume 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    In this report, the scope of the tests, the method of analysis, the results, and the conclusions are discussed. The first test indicated that the requirements generated by the Standard procedures and formulae appear to yield reasonable results, although some of the cost data provided as defaults in the Standard should be reevaluated. The second test provided experience that was useful in modifying the points compliance format, but did not uncover any procedural issues that would lead to unreasonable results. These conclusions are based on analysis using the Automated Residential Energy Standard (ARES) computer program, developed to simplify the process of standards generation.

  43. Automated pulmonary lobar ventilation measurements using volume-matched thoracic CT and MRI

    NASA Astrophysics Data System (ADS)

    Guo, F.; Svenningsen, S.; Bluemke, E.; Rajchl, M.; Yuan, J.; Fenster, A.; Parraga, G.

    2015-03-01

    Objectives: To develop and evaluate an automated registration and segmentation pipeline for regional lobar pulmonary structure-function measurements, using volume-matched thoracic CT and MRI in order to guide therapy. Methods: Ten subjects underwent pulmonary function tests and volume-matched 1H and 3He MRI and thoracic CT during a single 2-hr visit. CT was registered to 1H MRI using an affine method that incorporated block-matching and this was followed by a deformable step using free-form deformation. The resultant deformation field was used to deform the associated CT lobe mask that was generated using commercial software. 3He-1H image registration used the same two-step registration method and 3He ventilation was segmented using hierarchical k-means clustering. Whole lung and lobar 3He ventilation and ventilation defect percent (VDP) were generated by mapping ventilation defects to CT-defined whole lung and lobe volumes. Target CT-3He registration accuracy was evaluated using region- , surface distance- and volume-based metrics. Automated whole lung and lobar VDP was compared with semi-automated and manual results using paired t-tests. Results: The proposed pipeline yielded regional spatial agreement of 88.0+/-0.9% and surface distance error of 3.9+/-0.5 mm. Automated and manual whole lung and lobar ventilation and VDP were not significantly different and they were significantly correlated (r = 0.77, p < 0.0001). Conclusion: The proposed automated pipeline can be used to generate regional pulmonary structural-functional maps with high accuracy and robustness, providing an important tool for image-guided pulmonary interventions.
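
    The ventilation-defect-percent (VDP) arithmetic can be sketched compactly: cluster 3He signal intensities inside a lobe mask into defect vs. ventilated classes (a plain 1-D 2-means below, standing in for the paper's hierarchical k-means), then VDP = defect volume / lobe volume x 100. The synthetic image and mask are for illustration only.

    ```python
    # Lobar VDP from a synthetic 3He ventilation volume.
    import numpy as np

    def two_means(values, iters=50):
        """1-D k-means with k=2; returns the threshold between the clusters."""
        c = np.array([values.min(), values.max()], dtype=float)
        for _ in range(iters):
            assign = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
            for k in (0, 1):
                if (assign == k).any():
                    c[k] = values[assign == k].mean()
        return c.mean()  # midpoint between the two cluster centers

    rng = np.random.default_rng(0)
    ventilation = rng.normal(1.0, 0.1, size=(32, 32, 32))             # signal
    ventilation[8:16, 8:16, 8:16] = rng.normal(0.1, 0.05, (8, 8, 8))  # defect
    lobe_mask = np.ones(ventilation.shape, dtype=bool)                # toy lobe

    vals = ventilation[lobe_mask]
    threshold = two_means(vals)
    vdp = 100.0 * (vals < threshold).sum() / lobe_mask.sum()
    print(f"lobar VDP = {vdp:.1f}%")
    ```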

  44. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  45. Design and implementation of Ada programs to facilitate automated testing

    NASA Technical Reports Server (NTRS)

    Dean, Jack; Fox, Barry; Oropeza, Michael

    1991-01-01

    An automated method used to test the software components of COMPASS, an interactive computer-aided scheduling system, is presented. Each package of this system introduces a private type and operations to construct instances of that type, along with read and write routines for that type. Generic procedures that can generate test drivers for these functions are given, showing how the test drivers can read from a test data file the functions to call, the arguments for those functions, the anticipated result, and whether an exception should be raised for the function given the arguments.
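
    A Python analogue of the generated test drivers, as a sketch: each test-data record names the function to call, its arguments, and either an expected result or an expected exception. The record format and target functions are mine; the original drivers were Ada generics reading the same information from a test data file.

    ```python
    # Data-driven test driver: expected results and expected exceptions.

    def divide(a, b):
        return a / b

    TARGETS = {"divide": divide}

    # (function, args, expected result, expected exception or None)
    TEST_DATA = [
        ("divide", (6, 3), 2.0, None),
        ("divide", (1, 0), None, ZeroDivisionError),
    ]

    def run_driver(test_data):
        for name, args, expected, exc in test_data:
            try:
                result = TARGETS[name](*args)
            except Exception as e:
                ok = exc is not None and isinstance(e, exc)
                print(f"{name}{args}: raised {type(e).__name__} -> "
                      f"{'PASS' if ok else 'FAIL'}")
            else:
                ok = exc is None and result == expected
                print(f"{name}{args}: returned {result!r} -> "
                      f"{'PASS' if ok else 'FAIL'}")

    run_driver(TEST_DATA)
    ```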

  46. Automated Semantic Indices Related to Cognitive Function and Rate of Cognitive Decline

    ERIC Educational Resources Information Center

    Pakhomov, Serguei V. S.; Hemmy, Laura S.; Lim, Kelvin O.

    2012-01-01

    The objective of our study is to introduce a fully automated, computational linguistic technique to quantify semantic relations between words generated on a standard semantic verbal fluency test and to determine its cognitive and clinical correlates. Cognitive differences between patients with Alzheimer's disease and mild cognitive impairment are…

  47. Automated unit-level testing with heuristic rules

    NASA Technical Reports Server (NTRS)

    Carlisle, W. Homer; Chang, Kai-Hsiung; Cross, James H.; Keleher, William; Shackelford, Keith

    1990-01-01

    Software testing plays a significant role in the development of complex software systems. Current testing methods generally require significant effort to generate meaningful test cases. The QUEST/Ada system is a prototype system designed using CLIPS to experiment with expert system based test case generation. The prototype is designed to test for condition coverage, and attempts to generate test cases to cover all feasible branches contained in an Ada program. This paper reports on heuristics used by the system. These heuristics vary according to the amount of knowledge obtained by preprocessing and execution of the boolean conditions in the program.
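
    Heuristic test-case generation for branch coverage can be sketched with a branch-distance search: random candidates are mutated toward inputs that take an as-yet-uncovered branch. QUEST/Ada's CLIPS rule base is far richer; the target condition below is invented.

    ```python
    # Search-based coverage of the branch: x > 100 and x % 7 == 0.
    import random

    def branch_distance(x):
        """0 exactly when the target branch is taken; smaller is closer."""
        d = 0.0
        if not x > 100:
            d += (100 - x) + 1
        if not x % 7 == 0:
            d += min(x % 7, 7 - x % 7)
        return d

    def search(seed=1, budget=5000):
        rng = random.Random(seed)
        best = rng.randint(-1000, 1000)
        for _ in range(budget):
            candidate = best + rng.randint(-10, 10)   # mutate the best input
            if branch_distance(candidate) < branch_distance(best):
                best = candidate
            if branch_distance(best) == 0:
                break
        return best

    x = search()
    print(x, "covers branch:", branch_distance(x) == 0)
    ```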

  48. A digital flight control system verification laboratory

    NASA Technical Reports Server (NTRS)

    De Feo, P.; Saib, S.

    1982-01-01

    A NASA/FAA program has been established for the verification and validation of digital flight control systems (DFCS), with the primary objective being the development and analysis of automated verification tools. In order to enhance the capabilities, effectiveness, and ease of using the test environment, software verification tools can be applied. Tool design includes a static analyzer, an assertion generator, a symbolic executor, a dynamic analysis instrument, and an automated documentation generator. Static and dynamic tools are integrated with error detection capabilities, resulting in a facility which analyzes a representative testbed of DFCS software. Future investigations will focus on increasing the number of software test tools and on assessing cost effectiveness.

  49. Automated Item Generation with Recurrent Neural Networks.

    PubMed

    von Davier, Matthias

    2018-03-12

    Utilizing technology for automated item generation is not a new idea. However, test items used in commercial testing programs or in research are still predominantly written by humans, in most cases by content experts or professional item writers. Human experts are a limited resource and testing agencies incur high costs in the process of continuous renewal of item banks to sustain testing programs. Using algorithms instead holds the promise of providing unlimited resources for this crucial part of assessment development. The approach presented here deviates in several ways from previous attempts to solve this problem. In the past, automatic item generation relied either on generating clones of narrowly defined item types such as those found in language-free intelligence tests (e.g., Raven's Progressive Matrices) or on an extensive analysis of task components and derivation of schemata to produce items with pre-specified variability that are hoped to have predictable levels of difficulty. It is somewhat unlikely that researchers utilizing these previous approaches would look at the proposed approach with favor; however, recent applications of machine learning show success in solving tasks that seemed impossible for machines not too long ago. The proposed approach uses deep learning to implement probabilistic language models, not unlike what Google Brain and Amazon Alexa use for language processing and generation.
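
    As a dependency-free stand-in for the deep language models the paper proposes, the sketch below trains a bigram (Markov-chain) model on a handful of item stems and samples new stems from it: the same generate-from-a-probabilistic-language-model idea at toy scale, not the paper's actual method.

    ```python
    # Toy probabilistic language model for item-stem generation.
    import random
    from collections import defaultdict

    CORPUS = [
        "a train travels 60 miles in 90 minutes what is its average speed",
        "a car travels 120 miles in 2 hours what is its average speed",
        "a cyclist travels 45 miles in 3 hours what is its average speed",
    ]

    # Bigram counts: word -> list of observed successors (with repeats).
    model = defaultdict(list)
    for stem in CORPUS:
        words = ["<s>"] + stem.split() + ["</s>"]
        for prev, word in zip(words, words[1:]):
            model[prev].append(word)

    def sample_item(seed=7, max_len=20):
        """Sample one new item stem from the bigram model."""
        rng, word, out = random.Random(seed), "<s>", []
        while len(out) < max_len:
            word = rng.choice(model[word])
            if word == "</s>":
                break
            out.append(word)
        return " ".join(out)

    print(sample_item())
    ```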

  50. Clinical Chemistry Laboratory Automation in the 21st Century - Amat Victoria curam (Victory loves careful preparation)

    PubMed Central

    Armbruster, David A; Overcash, David R; Reyes, Jaime

    2014-01-01

    The era of automation arrived with the introduction of the AutoAnalyzer using continuous flow analysis and the Robot Chemist that automated the traditional manual analytical steps. Successive generations of stand-alone analysers increased analytical speed, offered the ability to test high volumes of patient specimens, and provided large assay menus. A dichotomy developed, with a group of analysers devoted to performing routine clinical chemistry tests and another group dedicated to performing immunoassays using a variety of methodologies. Development of integrated systems greatly improved the analytical phase of clinical laboratory testing and further automation was developed for pre-analytical procedures, such as sample identification, sorting, and centrifugation, and post-analytical procedures, such as specimen storage and archiving. All phases of testing were ultimately combined in total laboratory automation (TLA) through which all modules involved are physically linked by some kind of track system, moving samples through the process from beginning-to-end. A newer and very powerful, analytical methodology is liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS). LC-MS/MS has been automated but a future automation challenge will be to incorporate LC-MS/MS into TLA configurations. Another important facet of automation is informatics, including middleware, which interfaces the analyser software to a laboratory information system (LIS) and/or hospital information system (HIS). This software includes control of the overall operation of a TLA configuration and combines analytical results with patient demographic information to provide additional clinically useful information. This review describes automation relevant to clinical chemistry, but it must be recognised that automation applies to other specialties in the laboratory, e.g. haematology, urinalysis, microbiology. It is a given that automation will continue to evolve in the clinical laboratory, limited only by the imagination and ingenuity of laboratory scientists. PMID:25336760

  12. Evaluation of software tools for automated identification of neuroanatomical structures in quantitative β-amyloid PET imaging to diagnose Alzheimer's disease.

    PubMed

    Tuszynski, Tobias; Rullmann, Michael; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Gertz, Hermann-Josef; Hesse, Swen; Seese, Anita; Lobsien, Donald; Sabri, Osama; Barthel, Henryk

    2016-06-01

    For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures have been developed. Because the quality and performance of those tools in analyzing amyloid PET data have so far been poorly investigated, we compared four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional manual method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were defined on the data both manually and through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches, which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. The best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has great potential to substitute for the current standard procedure of manually defining VOIs in β-amyloid PET data analysis.
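
    As a concrete illustration of the statistics reported above, here is a minimal Python sketch computing SUVRs and the AD-vs-HC comparison, assuming mean regional uptake values have already been extracted; all numeric values below are illustrative, not study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical mean uptakes: composite neocortex and cerebellar reference.
neocortex_ad = np.array([1.9, 2.1, 1.8, 2.0, 2.2])
cerebellum_ad = np.array([1.0, 1.1, 0.9, 1.0, 1.1])
neocortex_hc = np.array([1.1, 1.2, 1.0, 1.1, 1.2])
cerebellum_hc = np.array([1.0, 1.1, 1.0, 1.0, 1.1])

# SUVR: target uptake normalized to the cerebellar reference region.
suvr_ad = neocortex_ad / cerebellum_ad
suvr_hc = neocortex_hc / cerebellum_hc

# Group comparison (Mann-Whitney U) and effect size (Cohen's d).
u, p = mannwhitneyu(suvr_ad, suvr_hc, alternative="two-sided")
pooled_sd = np.sqrt((suvr_ad.var(ddof=1) + suvr_hc.var(ddof=1)) / 2)
d = (suvr_ad.mean() - suvr_hc.mean()) / pooled_sd
print(f"U={u:.1f}, p={p:.3f}, d={d:.2f}")
```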

  13. Design and Testing of Suit Regulator Test Rigs

    NASA Technical Reports Server (NTRS)

    Campbell, Colin

    2010-01-01

    The next generation space suit requires additional capabilities for controlling and adjusting internal pressure compared to historical designs. Next generation suit pressures will range from slight pressure, for astronaut prebreathe comfort, to hyperbaric pressure levels for emergency medical treatment of decompression sickness. In order to test these regulators throughout their development life cycle, novel automated test rigs are being developed. This paper addresses the design philosophy, performance requirements, physical implementation, and test results with various units under test.

  14. Combining Archetypes, Ontologies and Formalization Enables Automated Computation of Quality Indicators.

    PubMed

    Legaz-García, María Del Carmen; Dentler, Kathrin; Fernández-Breis, Jesualdo Tomás; Cornet, Ronald

    2017-01-01

    ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method to formalize quality indicators. The method has been implemented in the CLIF tool which supports its users in generating computable queries based on a patient data model which can be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the process of generating SPARQL queries from quality indicators that have been formalized with CLIF and integrated them into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.
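
    To illustrate the kind of computable query involved, the following minimal Python sketch executes a SPARQL query over an RDF patient-data graph with rdflib. The file name, namespace, and class/property IRIs are hypothetical, and the actual CLIF formalization and ArchMS ontology are not reproduced here.

```python
from rdflib import Graph

g = Graph()
g.parse("patient_records.ttl", format="turtle")  # hypothetical data file

# Numerator-style indicator query ("diabetes patients with a documented
# HbA1c test"); the indicator logic is simplified for illustration.
query = """
PREFIX ex: <http://example.org/archms#>
SELECT (COUNT(DISTINCT ?p) AS ?numerator) WHERE {
    ?p a ex:DiabetesPatient ;
       ex:hasObservation ?obs .
    ?obs a ex:HbA1cTest .
}
"""
for row in g.query(query):
    print("numerator:", row.numerator)
```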

  15. Optimizing Decision Preparedness by Adapting Scenario Complexity and Automating Scenario Generation

    NASA Technical Reports Server (NTRS)

    Dunne, Rob; Schatz, Sae; Flore, Stephen M.; Nicholson, Denise

    2011-01-01

    Klein's recognition-primed decision (RPD) framework proposes that experts make decisions by recognizing similarities between current decision situations and previous decision experiences. Unfortunately, military personnel are often presented with situations that they have not experienced before. Scenario-based training (SBT) can help mitigate this gap. However, SBT remains a challenging and inefficient training approach. To address these limitations, the authors present an innovative formulation of scenario complexity that contributes to the larger research goal of developing an automated scenario generation system. This system will enable trainees to effectively advance through a variety of increasingly complex decision situations and experiences. By adapting scenario complexities and automating generation, trainees will be provided with a greater variety of appropriately calibrated training events, thus broadening their repositories of experience. Preliminary results from empirical testing (N=24) of the proof-of-concept formula are presented, and future avenues of scenario complexity research are also discussed.

  16. Time-Motion Analysis of Four Automated Systems for the Detection of Chlamydia trachomatis and Neisseria gonorrhoeae by Nucleic Acid Amplification Testing.

    PubMed

    Williams, James A; Eddleman, Laura; Pantone, Amy; Martinez, Regina; Young, Stephen; Van Der Pol, Barbara

    2014-08-01

    Next-generation diagnostics for Chlamydia trachomatis and Neisseria gonorrhoeae are available on semi- or fully-automated platforms. These systems require less hands-on time than older platforms and are user friendly. Four automated systems, the ABBOTT m2000 system, Becton Dickinson Viper System with XTR Technology, Gen-Probe Tigris DTS system, and Roche cobas 4800 system, were evaluated for total run time, hands-on time, and walk-away time. All of the systems evaluated in this time-motion study were able to complete a diagnostic test run within an 8-h work shift, instrument setup and operation were straightforward and uncomplicated, and walk-away time ranged from approximately 90 to 270 min in a head-to-head comparison of each system. All of the automated systems provide technical staff with increased time to perform other tasks during the run, offer easy expansion of the diagnostic test menu, and have the ability to increase specimen throughput. © 2013 Society for Laboratory Automation and Screening.

  17. An Ada programming support environment

    NASA Technical Reports Server (NTRS)

    Tyrrill, AL; Chan, A. David

    1986-01-01

    The toolset of an Ada Programming Support Environment (APSE) being developed at North American Aircraft Operations (NAAO) of Rockwell International is described. The APSE is resident on three different hosts and must support developments for the hosts and for embedded targets. Tools and developed software must be freely portable between the hosts. The toolset includes the usual editors, compilers, linkers, debuggers, configuration managers, and documentation tools. Generally, these are being supplied by the host computer vendors. Other tools, for example, a pretty printer, cross referencer, compilation order tool, and management tools, were obtained from public-domain sources, are implemented in Ada, and are being ported to the hosts. Several tools being implemented in-house are of interest; these include an Ada Design Language processor based on compilable Ada. A Standalone Test Environment Generator facilitates test tool construction and partially automates unit-level testing. A Code Auditor/Static Analyzer permits the Ada programs to be evaluated against measures of quality. An Ada Comment Box Generator partially automates generation of header comment boxes.

  18. Launch Control System Software Development System Automation Testing

    NASA Technical Reports Server (NTRS)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This system requires high quality testing that will measure and test the capabilities of the system. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group including interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI, which uses streamed simulated data from the testing servers to produce data, plots, statuses, etc. in the GUI. The software used to develop the automated tests included an automated testing framework and an automation library. The automated testing framework has a tabular-style syntax, which means each line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The automation library contains functionality to automate anything that appears on a desired screen, using image recognition software to detect and control GUI components. The data section contains any data values strictly created for the current testing file. The body section holds the tests that are being run. The function section can include any number of functions that may be used by the current testing file or any other file that resources it. The resources and body sections are required for all test files; the data and function sections can be left empty if the data values and functions being used come from a resourced library or another file. To better equip the automation team, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task of installing and training an optical character recognition (OCR) tool to Brandon Echols, a fellow intern, and me. The purpose of the OCR tool is to analyze an image and find the coordinates of any group of text. Issues that arose while installing the OCR tool included the absence of certain libraries needed to train the tool and an outdated software version. We eventually resolved the issues and successfully installed the OCR tool. Training the tool required many images in different fonts and sizes, but in the end the tool learned to accurately decipher the text in the images and their coordinates. The OCR tool produced a file that contained significant metadata for each section of text, but only the text and its coordinates were required for our purpose. The team wrote a script to parse the information we wanted from the OCR file into a different file to be used by automation functions within the automated framework. Since a majority of development and testing of the automated test cases for the GUI in question has been done using live simulated data on the workstations at the Launch Control Center (LCC), a large amount of progress has been made. As of this writing, about 60% of all automated testing has been implemented. Additionally, the OCR tool will help make our automated tests more robust because its text recognition scales well to different text fonts and sizes. Soon the whole test system will be automated, allowing more full-time engineers to work on development projects.
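
    The abstract does not name the OCR tool that was installed; as an illustrative substitute, here is a minimal Python sketch that extracts per-word text and screen coordinates with the open-source Tesseract engine via pytesseract (the screenshot file name is hypothetical).

```python
import pytesseract
from PIL import Image

img = Image.open("gui_screenshot.png")  # hypothetical screen capture

# image_to_data returns per-word text plus bounding-box coordinates.
data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)

# Keep only confident detections: the text and where it sits on screen.
for text, left, top, conf in zip(data["text"], data["left"],
                                 data["top"], data["conf"]):
    if text.strip() and int(conf) > 60:
        print(f"{text!r} at ({left}, {top})")
```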

  19. Improving Flight Software Module Validation Efforts : a Modular, Extendable Testbed Software Framework

    NASA Technical Reports Server (NTRS)

    Lange, R. Connor

    2012-01-01

    Ever since Explorer-1, the United States' first Earth satellite, was developed and launched in 1958, JPL has developed many more spacecraft, including landers and orbiters. While these spacecraft vary greatly in their missions, capabilities, and destinations, they all have something in common: all of their components had to be comprehensively tested. While thorough testing is important to mitigate risk, it is also a very expensive and time-consuming process. Thankfully, since virtually all of the software testing procedures for SMAP are computer controlled, these procedures can be automated. Most people testing SMAP flight software (FSW) would only need to write tests that exercise specific requirements and then check the filtered results to verify everything occurred as planned. This gives developers the ability to automatically launch tests on the testbed, distill the resulting logs into only the important information, generate validation documentation, and then deliver the documentation to management. With many of the steps in FSW testing automated, developers can use their limited time more effectively and can validate SMAP FSW modules more quickly and test them more rigorously. As a result of the various benefits of automating much of the testing process, management is considering the use of these automated tools in future FSW validation efforts.
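
    As a minimal sketch of the log-distillation step described above (the real SMAP tooling and log format are not public, so the severity pattern below is purely illustrative):

```python
import re

# Hypothetical severity tokens marking the lines an engineer must review.
IMPORTANT = re.compile(r"\b(ERROR|WARN|FAIL|ASSERT)\b")

def distill(log_path, out_path):
    """Copy only the important lines from a raw testbed log."""
    with open(log_path) as src, open(out_path, "w") as dst:
        for line in src:
            if IMPORTANT.search(line):
                dst.write(line)

distill("testbed_run.log", "testbed_run_summary.log")  # hypothetical paths
```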

  20. Automated screening of propulsion system test data by neural networks, phase 1

    NASA Technical Reports Server (NTRS)

    Hoyt, W. Andes; Whitehead, Bruce A.

    1992-01-01

    The evaluation of propulsion system test and flight performance data involves reviewing an extremely large volume of sensor data generated by each test. An automated system that screens large volumes of data and identifies propulsion system parameters which appear unusual or anomalous will increase the productivity of data analysis. Data analysts may then focus on a smaller subset of anomalous data for further evaluation of propulsion system tests. Such an automated data screening system would give NASA the benefit of a reduction in the manpower and time required to complete a propulsion system data evaluation. A phase 1 effort to develop a prototype data screening system is reported. Neural networks will detect anomalies based on nominal propulsion system data only. It appears that a reasonable goal for an operational system would be to screen out 95% of the nominal data, leaving less than 5% needing further analysis by human experts.
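
    The following is a minimal Python sketch of screening against nominal-only training data; the reported work used neural networks, so this simpler per-parameter z-score baseline only illustrates the screening idea and the 95% screen-out goal.

```python
import numpy as np

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, size=(1000, 8))   # nominal sensor history
new_run = rng.normal(0.0, 1.0, size=(200, 8))    # data from a new test

# Fit a simple model of nominal behavior, one mean/sigma per parameter.
mu, sigma = nominal.mean(axis=0), nominal.std(axis=0)

# Flag a sample if any parameter deviates beyond k standard deviations.
k = 3.0
anomalous = (np.abs((new_run - mu) / sigma) > k).any(axis=1)
print(f"screened out {100 * (1 - anomalous.mean()):.1f}% as nominal")
```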

  1. Fully automated treatment planning for head and neck radiotherapy using a voxel-based dose prediction and dose mimicking method

    NASA Astrophysics Data System (ADS)

    McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.

    2017-08-01

    Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan, with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organs-at-risk criteria levels evaluated, compared with clinical plans. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical plans across the 12 patients tested; automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose in one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction. It is a promising approach for fully automated treatment planning and can be readily applied to different treatment sites and modalities.
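
    As a minimal sketch of the dose-mimicking step, assuming a linear dose-influence matrix that maps beamlet weights to voxel doses (all arrays below are random stand-ins; a clinical dose engine such as collapsed cone convolution is far more involved):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_voxels, n_beamlets = 500, 40
D = rng.random((n_voxels, n_beamlets))          # dose-influence matrix
predicted_dose = rng.random(n_voxels) * 60.0    # predicted Gy per voxel

# Find nonnegative beamlet weights whose dose best mimics the prediction.
weights, residual = nnls(D, predicted_dose)
achieved = D @ weights
gap = np.abs(achieved - predicted_dose).mean()
print(f"mean |achieved - predicted| = {gap:.2f} Gy")
```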

  3. Feasibility of Carbon Fiber/PEEK Composites for Cryogenic Fuel Tank Applications

    NASA Astrophysics Data System (ADS)

    Doyle, K.; Doyle, A.; O Bradaigh, C. M.; Jaredson, D.

    2012-07-01

    This paper investigates the feasibility of CF/PEEK composites for the manufacture of cryogenic fuel tanks for Next Generation Space Launchers. The material considered is CF/PEEK tape from Suprem SA, and the proposed manufacturing process for the fuel tank is Automated Tape Placement. Material characterization was carried out on test laminates manufactured in an autoclave and also by Automated Tape Placement with in-situ consolidation. The results of the two processes were compared to establish whether there is any knock-down in properties for the automated tape placement process. A permeability test rig was set up with a helium leak detector, and the effect of thermal cycling on the permeability properties of CF/PEEK was measured. A 1/10th-scale demonstrator was designed and manufactured, consisting of a cylinder manufactured by automated tape placement and an upper dome manufactured by autoclave processing. The assembly was achieved by Amorphous Interlayer Bonding with PEI.

  4. Continuous stacking computational approach based automated microscope slide scanner

    NASA Astrophysics Data System (ADS)

    Murali, Swetha; Adhikari, Jayesh Vasudeva; Jagannadh, Veerendra Kalyan; Gorthi, Sai Siva

    2018-02-01

    Cost-effective and automated acquisition of whole slide images is a bottleneck for wide-scale deployment of digital pathology. In this article, a computation augmented approach for the development of an automated microscope slide scanner is presented. The realization of a prototype device built using inexpensive off-the-shelf optical components and motors is detailed. The applicability of the developed prototype to clinical diagnostic testing is demonstrated by generating good quality digital images of malaria-infected blood smears. Further, the acquired slide images have been processed to identify and count the number of malaria-infected red blood cells and thereby perform quantitative parasitemia level estimation. The presented prototype would enable cost-effective deployment of slide-based cyto-diagnostic testing in endemic areas.

  5. Applying Adaptive Variables in Computerised Adaptive Testing

    ERIC Educational Resources Information Center

    Triantafillou, Evangelos; Georgiadou, Elissavet; Economides, Anastasios A.

    2007-01-01

    Current research in computerised adaptive testing (CAT) focuses on applications, in small and large scale, that address self assessment, training, employment, teacher professional development for schools, industry, military, assessment of non-cognitive skills, etc. Dynamic item generation tools and automated scoring of complex, constructed…

  6. METHODS FOR THE SPIRAL SALMONELLA MUTAGENICITY ASSAY INCLUDING SPECIALIZED APPLICATIONS

    EPA Science Inventory

    ABSTRACT

    An automated approach to bacterial mutagenicity testing--the spiral Salmonella assay--was developed to simplify testing and to reduce the labor and materials required to generate dose-responsive mutagenicity information. This document provides the reader with an ...

  7. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic PathFinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
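
    The core idea can be illustrated in a few lines of Python: collect the branch conditions along a program path and hand them to a constraint solver to obtain a concrete test input. SPF itself analyzes Java bytecode; the z3 solver is used here only as a readily available stand-in.

```python
from z3 import Int, Solver, sat

x = Int("x")  # symbolic stand-in for an unspecified program input

# Path condition for reaching the "then" branch of:
#   if (x > 10 && x % 7 == 3) { ... }
s = Solver()
s.add(x > 10, x % 7 == 3)

# Solving the path condition yields a concrete test input for this path.
if s.check() == sat:
    print("test input reaching this path: x =", s.model()[x])
```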

  8. Automation of Educational Tasks for Academic Radiology.

    PubMed

    Lamar, David L; Richardson, Michael L; Carlson, Blake

    2016-07-01

    The process of education involves a variety of repetitious tasks. We believe that appropriate computer tools can automate many of these chores, and allow both educators and their students to devote a lot more of their time to actual teaching and learning. This paper details tools that we have used to automate a broad range of academic radiology-specific tasks on Mac OS X, iOS, and Windows platforms. Some of the tools we describe here require little expertise or time to use; others require some basic knowledge of computer programming. We used TextExpander (Mac, iOS) and AutoHotKey (Win) for automated generation of text files, such as resident performance reviews and radiology interpretations. Custom statistical calculations were performed using TextExpander and the Python programming language. A workflow for automated note-taking was developed using Evernote (Mac, iOS, Win) and Hazel (Mac). Automated resident procedure logging was accomplished using Editorial (iOS) and Python. We created three variants of a teaching session logger using Drafts (iOS) and Pythonista (iOS). Editorial and Drafts were used to create flashcards for knowledge review. We developed a mobile reference management system for iOS using Editorial. We used the Workflow app (iOS) to automatically generate a text message reminder for daily conferences. Finally, we developed two separate automated workflows-one with Evernote (Mac, iOS, Win) and one with Python (Mac, Win)-that generate simple automated teaching file collections. We have beta-tested these workflows, techniques, and scripts on several of our fellow radiologists. All of them expressed enthusiasm for these tools and were able to use one or more of them to automate their own educational activities. Appropriate computer tools can automate many educational tasks, and thereby allow both educators and their students to devote a lot more of their time to actual teaching and learning. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing

    NASA Astrophysics Data System (ADS)

    Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel

    Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can potentially be executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show how, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
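
    As a minimal sketch of the Adaptive Random Testing strategy over a numeric input domain (the UML/MARTE environment models and simulator are not shown): among a set of random candidates, the test executed next is the one farthest from all previously executed tests, spreading inputs more evenly than plain random testing.

```python
import random

def distance(a, b):
    """Euclidean distance between two input vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def next_test(executed, dim=2, candidates=10):
    """Pick the candidate maximizing its distance to the executed set."""
    pool = [tuple(random.random() for _ in range(dim))
            for _ in range(candidates)]
    if not executed:
        return pool[0]
    return max(pool, key=lambda c: min(distance(c, e) for e in executed))

executed = []
for _ in range(5):
    t = next_test(executed)
    executed.append(t)   # here: run the system under test with input t
    print(t)
```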

  10. Requirements-Based Conformance Testing of ARINC 653 Real-Time Operating Systems

    NASA Astrophysics Data System (ADS)

    Maksimov, Andrey

    2010-08-01

    Requirements-based testing is emphasized in avionics certification documents because this strategy has been found to be the most effective at revealing errors. This paper describes a unified requirements-based approach to the creation of conformance test suites for mission-critical systems. The approach uses formal machine-readable specifications of requirements and a finite state machine model for on-the-fly generation of test sequences. The paper also presents the test system for automated test generation for ARINC 653 services built on this approach. Possible applications of the presented approach to various areas of avionics embedded systems testing are discussed.
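
    As a minimal sketch of on-the-fly test sequence generation from a finite state machine (the ARINC 653 service model itself is not public, so the states and events below are hypothetical):

```python
import random

# Hypothetical partition-state machine: (state, event) -> next state.
fsm = {
    ("COLD_START", "init"): "NORMAL",
    ("NORMAL", "error"): "IDLE",
    ("NORMAL", "restart"): "WARM_START",
    ("WARM_START", "init"): "NORMAL",
}

def generate_sequence(fsm, start, length=6):
    """Random walk over the FSM, emitting an event sequence on the fly."""
    state, seq = start, []
    for _ in range(length):
        outgoing = [(s, e) for (s, e) in fsm if s == state]
        if not outgoing:
            break  # no transition leaves this state: sequence ends here
        step = random.choice(outgoing)
        seq.append(step[1])
        state = fsm[step]
    return seq

print(generate_sequence(fsm, "COLD_START"))
```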

  11. Combinatorial materials research applied to the development of new surface coatings VII: An automated system for adhesion testing

    NASA Astrophysics Data System (ADS)

    Chisholm, Bret J.; Webster, Dean C.; Bennett, James C.; Berry, Missy; Christianson, David; Kim, Jongsoo; Mayo, Bret; Gubbins, Nathan

    2007-07-01

    An automated, high-throughput adhesion workflow that enables pseudobarnacle adhesion and coating/substrate adhesion to be measured on coating patches arranged in an array format on 4×8 in.² panels was developed. The adhesion workflow consists of the following process steps: (1) application of an adhesive to the coating array; (2) insertion of panels into a clamping device; (3) insertion of aluminum studs into the clamping device and onto coating surfaces, aligned with the adhesive; (4) curing of the adhesive; and (5) automated removal of the aluminum studs. Validation experiments comparing data generated using the automated, high-throughput workflow to data obtained using conventional, manual methods showed that the automated system allows for accurate ranking of relative coating adhesion performance.

  12. EpHLA: an innovative and user-friendly software automating the HLAMatchmaker algorithm for antibody analysis.

    PubMed

    Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad

    2011-12-01

    The global challenge for solid organ transplantation programs is to distribute organs to highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues referred to as eplets as essential components of HLA epitopes. Currently, the analyses require the creation of temporary files and the manual cut and paste of laboratory test results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in the Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to this data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and it provides data access functionalities for software developed in these languages. The tool functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software which enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pair selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Using Generative Representations to Evolve Robots. Chapter 1

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2004-01-01

    Recent research has demonstrated the ability of evolutionary algorithms to automatically design both the physical structure and software controller of real physical robots. One of the challenges for these automated design systems is to improve their ability to scale to the high complexities found in real-world problems. Here we claim that for automated design systems to scale in complexity they must use a representation which allows for the hierarchical creation and reuse of modules, which we call a generative representation. Not only is the ability to reuse modules necessary for functional scalability, but it is also valuable for improving efficiency in testing and construction. We then describe an evolutionary design system with a generative representation capable of hierarchical modularity and demonstrate it for the design of locomoting robots in simulation. Finally, results from our experiments show that evolution with our generative representation produces better robots than those evolved with a non-generative representation.

  14. Honeywell Technical Order Transfer Tests.

    DTIC Science & Technology

    1987-06-12

    of simple corrections, a reasonable reproduction of the original could be generated. The quality was not good enough for a production environment. Lack of automated quality control (AQC) tools could account for the errors.

  15. Automated crystallographic system for high-throughput protein structure determination.

    PubMed

    Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F

    2003-07-01

    High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.

  16. Perspectives on bioanalytical mass spectrometry and automation in drug discovery.

    PubMed

    Janiszewski, John S; Liston, Theodore E; Cole, Mark J

    2008-11-01

    The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.

  17. Automated Test Case Generator for Phishing Prevention Using Generative Grammars and Discriminative Methods

    ERIC Educational Resources Information Center

    Palka, Sean

    2015-01-01

    This research details a methodology designed for creating content in support of various phishing prevention tasks including live exercises and detection algorithm research. Our system uses probabilistic context-free grammars (PCFG) and variable interpolation as part of a multi-pass method to create diverse and consistent phishing email content on…
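
    Although the abstract is truncated, the PCFG mechanism it names is standard; the following is a minimal Python sketch of PCFG-driven text generation with variable interpolation, using an entirely hypothetical toy grammar rather than the author's phishing grammars.

```python
import random

# Toy probabilistic grammar: symbol -> list of (probability, production).
pcfg = {
    "EMAIL":    [(1.0, ["GREETING", "BODY", "SIGNOFF"])],
    "GREETING": [(0.5, ["Dear {name},"]), (0.5, ["Hello {name},"])],
    "BODY":     [(0.7, ["Your account requires verification."]),
                 (0.3, ["Unusual activity was detected on your account."])],
    "SIGNOFF":  [(1.0, ["Regards, {sender}"])],
}

def expand(symbol, variables):
    """Recursively expand a symbol, weighting choices by probability."""
    if symbol not in pcfg:                       # terminal: interpolate
        return symbol.format(**variables)
    weights, options = zip(*pcfg[symbol])
    production = random.choices(options, weights=weights)[0]
    return " ".join(expand(s, variables) for s in production)

print(expand("EMAIL", {"name": "Alex", "sender": "IT Support"}))
```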

  18. Garment Counting in a Textile Warehouse by Means of a Laser Imaging System

    PubMed Central

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-01-01

    Textile logistic warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost, small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks, generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
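
    As a minimal sketch of the counting step, assuming the laser and phototransistor array yield a one-dimensional occlusion profile in which each hanger hook appears as a contiguous run of blocked sensors (the profile below is invented):

```python
import numpy as np

# Hypothetical profile: 1 = beam blocked by a hook, 0 = beam passes.
profile = np.array([0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0])

# Count rising edges (0 -> 1 transitions): one per distinct hook.
edges = np.diff(np.concatenate(([0], profile)))
n_hangers = int((edges == 1).sum())
print("hangers counted:", n_hangers)  # -> 3
```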

  20. The Science of Home Automation

    NASA Astrophysics Data System (ADS)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL, we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.
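
    As a minimal sketch of Dynamic Time Warping, the distance metric named above for the modified DBSCAN variant (the activity sequences are illustrative):

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW distance between 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Same shape, different timing: DTW treats these as identical.
print(dtw([1, 2, 3, 4], [1, 1, 2, 3, 4]))  # -> 0.0
```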

  1. AN AUTOMATED SYSTEM FOR PRODUCING UNIFORM SURFACE DEPOSITS OF DRY PARTICLES

    EPA Science Inventory

    A laboratory system has been constructed that uniformly deposits dry particles onto any type of test surface. Devised as a quality assurance tool for the purpose of evaluating surface sampling methods for lead, it also may be used to generate test surfaces for any contaminant ...

  2. Common Ada Programming Support Environment (APSE) Interface Set (CAIS) Implementation Validation Capability (CIVC2)

    DTIC Science & Technology

    1992-06-01

    Paper, Version 2.0, December 1989. [Woodcock90] Gary Woodcock, Automated Generation of Hypertext Documents, CIVC Technical Report (working paper...environment setup, performance testing, assessor testing, and analysis) of the ACEC. A captive scenario example could be developed that would guide the

  3. Representation of research hypotheses

    PubMed Central

    2011-01-01

    Background: Hypotheses are now being automatically produced on an industrial scale by computers in biology; e.g., the annotation of a genome is essentially a large set of hypotheses generated by sequence similarity programs, and robot scientists enable the full automation of a scientific investigation, including the generation and testing of research hypotheses. Results: This paper proposes a logically defined way of recording automatically generated hypotheses in a machine-amenable form. The proposed formalism allows the description of complete hypothesis sets as specified input and output for scientific investigations. The formalism supports the decomposition of research hypotheses into more specialised hypotheses if that is required by an application. Hypotheses are represented in an operational way: it is possible to design an experiment to test them. The explicit formal description of research hypotheses promotes the explicit formal description of the results and conclusions of an investigation. The paper also proposes a framework for automated hypothesis generation. We demonstrate how the key components of the proposed framework are implemented in the Robot Scientist "Adam". Conclusions: A formal representation of automatically generated research hypotheses can help to improve the way humans produce, record, and validate research hypotheses. Availability: http://www.aber.ac.uk/en/cs/research/cb/projects/robotscientist/results/ PMID:21624164

  4. Multimedia abstract generation of intensive care data: the automation of clinical processes through AI methodologies.

    PubMed

    Jordan, Desmond; Rose, Sydney E

    2010-04-01

    Medical errors from communication failures are enormous during the perioperative period of cardiac surgical patients. As caregivers change shifts or surgical patients change location within the hospital, key information is lost or misconstrued. After a baseline cognitive study of information needs and caregiver workflow, we implemented an advanced clinical decision support tool of intelligent agents, medical logic modules, and text generators called the "Inference Engine" to summarize an individual patient's raw medical data elements into procedural milestones, illness severity, and care therapies. The system generates two displays: 1) for the continuum of care, multimedia abstract generation of intensive care data (MAGIC), an expert system that automatically generates a physician briefing of a cardiac patient's operative course in a multimodal format; and 2) for an isolated point in time, the "Inference Engine", a system that provides a real-time, high-level, summarized depiction of a patient's clinical status. In our studies, system accuracy and efficacy were judged against clinician performance in the workplace. To test the automated physician briefing, MAGIC, the patient's intraoperative course was reviewed in the intensive care unit before patient arrival. It was then judged against the actual physician briefing and that given in a cohort of patients where the system was not used. To test the real-time representation of the patient's clinical status, system inferences were judged against clinician decisions. Changes in workflow and situational awareness were assessed by questionnaires and process evaluation. MAGIC provides 200% more information, twice the accuracy, and enhances situational awareness. This study demonstrates that the automation of clinical processes through AI methodologies yields positive results.

  5. On the virtues of automated quantitative structure-activity relationship: the new kid on the block.

    PubMed

    de Oliveira, Marcelo T; Katekawa, Edson

    2018-02-01

    Quantitative structure-activity relationship (QSAR) modelling has proved to be an invaluable tool in medicinal chemistry. Data availability at unprecedented levels through various databases has contributed to a resurgence of interest in QSAR. In this context, rapid generation of quality predictive models is highly desirable for hit identification and lead optimization. We showcase the application of an automated QSAR approach, which randomly selects multiple training/test sets and utilizes machine-learning algorithms to generate predictive models. Results demonstrate that AutoQSAR produces models of quality similar to or better than those generated by practitioners in the field, but in just a fraction of the time. Despite the potential of the concept to benefit the community, the AutoQSAR opportunity has been largely undervalued.
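
    The automated-QSAR idea, repeated random training/test splits combined with a pool of learners, can be sketched in a few lines of Python with scikit-learn; the descriptors and activities below are random stand-ins, and the commercial AutoQSAR implementation is not shown.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.random((200, 20))                             # molecular descriptors
y = X[:, 0] * 3 - X[:, 1] + rng.normal(0, 0.1, 200)   # activities

best = (None, -np.inf)
for seed in range(5):                  # multiple random train/test splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    for model in (Ridge(), RandomForestRegressor(random_state=seed)):
        score = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
        if score > best[1]:
            best = (model, score)      # keep the best-scoring model

print(f"best model: {best[0].__class__.__name__}, R2 = {best[1]:.3f}")
```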

  6. Automating Initial Guess Generation for High Fidelity Trajectory Optimization Tools

    NASA Technical Reports Server (NTRS)

    Villa, Benjamin; Lantoine, Gregory; Sims, Jon; Whiffen, Gregory

    2013-01-01

    Many academic studies in spaceflight dynamics rely on simplified dynamical models, such as restricted three-body models or averaged forms of the equations of motion of an orbiter. In practice, the end result of these preliminary orbit studies needs to be transformed into more realistic models, in particular to generate good initial guesses for high-fidelity trajectory optimization tools like Mystic. This paper reviews and extends some of the approaches used in the literature to perform such a task, and explores the inherent trade-offs of such a transformation with a view toward automating it for the case of ballistic arcs. Sample test cases in the libration point regimes and small body orbiter transfers are presented.

  7. Automated peak picking and peak integration in macromolecular NMR spectra using AUTOPSY.

    PubMed

    Koradi, R; Billeter, M; Engeli, M; Güntert, P; Wüthrich, K

    1998-12-01

    A new approach for automated peak picking of multidimensional protein NMR spectra with strong overlap is introduced, which makes use of the program AUTOPSY (automated peak picking for NMR spectroscopy). The main elements of this program are a novel function for local noise level calculation, the use of symmetry considerations, and the use of lineshapes extracted from well-separated peaks for resolving groups of strongly overlapping peaks. The algorithm generates peak lists with precise chemical shift and integral intensities, and a reliability measure for the recognition of each peak. The results of automated peak picking of NOESY spectra with AUTOPSY were tested in combination with the combined automated NOESY cross peak assignment and structure calculation routine NOAH implemented in the program DYANA. The quality of the resulting structures was found to be comparable with those from corresponding data obtained with manual peak picking. Copyright 1998 Academic Press.
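
    As a minimal sketch of the local-noise-threshold idea on a one-dimensional trace (AUTOPSY itself works on multidimensional spectra and additionally uses symmetry and lineshape analysis, none of which is reproduced here):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 1000)
trace = (np.exp(-((x - 3) ** 2) / 0.01) +        # two synthetic peaks
         0.6 * np.exp(-((x - 7) ** 2) / 0.02) +
         rng.normal(0, 0.03, x.size))            # noise floor

# Local noise level: rolling median of absolute amplitude.
window = 101
noise = np.array([np.median(np.abs(trace[max(0, i - window):i + window]))
                  for i in range(trace.size)])

# Accept only peaks well above the locally estimated noise.
peaks, _ = find_peaks(trace, height=5 * noise, prominence=0.1)
print("peak positions:", x[peaks])
```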

  8. TH-AB-207A-05: A Fully-Automated Pipeline for Generating CT Images Across a Range of Doses and Reconstruction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Hoffman, J

    Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional recon methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture. Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.

  9. Emerging Technologies for the Clinical Microbiology Laboratory

    PubMed Central

    Buchan, Blake W.

    2014-01-01

    SUMMARY In this review we examine the literature related to emerging technologies that will help to reshape the clinical microbiology laboratory. These topics include nucleic acid amplification tests such as isothermal and point-of-care molecular diagnostics, multiplexed panels for syndromic diagnosis, digital PCR, next-generation sequencing, and automation of molecular tests. We also review matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) and electrospray ionization (ESI) mass spectrometry methods and their role in identification of microorganisms. Lastly, we review the shift to liquid-based microbiology and the integration of partial and full laboratory automation that are beginning to impact the clinical microbiology laboratory. PMID:25278575

  10. Interactive specification acquisition via scenarios: A proposal

    NASA Technical Reports Server (NTRS)

    Hall, Robert J.

    1992-01-01

    Some reactive systems are most naturally specified by giving large collections of behavior scenarios. These collections not only specify the behavior of the system, but also provide good test suites for validating the implemented system. Due to the complexity of the systems and the number of scenarios, however, it appears that automated assistance is necessary to make this software development process workable. Interactive Specification Acquisition Tool (ISAT) is a proposed interactive system for supporting the acquisition and maintenance of a formal system specification from scenarios, as well as automatic synthesis of control code and automated test generation. This paper discusses the background, motivation, proposed functions, and implementation status of ISAT.

  11. Design and realization of the compound text-based test questions library management system

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Feng, Lin; Zhao, Xin

    2011-12-01

    The test questions library management system is an essential part of an on-line examination system. Its basic requirement is to handle compound text, including information such as images and formulae, and to create the corresponding Word documents. After comparing the two current solutions for creating documents, this paper presents a design based on the Word Automation mechanism using OLE/COM technology, discusses the application of Word Automation in detail, and finally provides the operating results of the system, which are of high reference value for improving the generation efficiency of project documents and report forms.
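
    As a minimal sketch of Word Automation through OLE/COM, here driven from Python via the pywin32 package rather than the system's own code; the output path is hypothetical and the example is Windows-only.

```python
import win32com.client

# Launch (or attach to) Word through its COM automation interface.
word = win32com.client.Dispatch("Word.Application")
word.Visible = False

# Create a document and write generated test content into it.
doc = word.Documents.Add()
doc.Content.Text = "Question 1: State Newton's second law.\n"

doc.SaveAs(r"C:\temp\generated_test.docx")  # hypothetical output path
doc.Close()
word.Quit()
```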

  12. Use of information and communication technologies for teaching physics at the Technical University

    NASA Astrophysics Data System (ADS)

    Polezhaev, V. D.; Polezhaeva, L. N.; Kamenev, V. V.

    2017-01-01

    The paper discusses ways to improve the methods and algorithms of automated knowledge control, and approaches to building and effectively operating electronic teaching complexes, which include tests of a new generation whose use is not limited to assessment alone. The capabilities of the computer-based testing system SCIENTIA are presented. This system is a tool for automating knowledge control that can be used for the assessment and monitoring of students' knowledge in different types of exams, self-assessment by students, preparation of test materials, creation of a unified database of tests on a wide range of subjects, etc. Successful operation of the information system has been confirmed in practice during the physics course taken by students at the Technical University.

  13. Distraction or cognitive overload? Using modulations of the autonomic nervous system to discriminate the possible negative effects of advanced assistance system.

    PubMed

    Ruscio, D; Bos, A J; Ciceri, M R

    2017-06-01

    The interaction with Advanced Driver Assistance Systems has several positive implications for road safety, but also some potential downsides such as mental workload and automation complacency. Malleable attentional resources allocation theory describes two possible processes that can generate workload during interaction with advanced assisting devices. The purpose of the present study is to determine whether specific analysis of the different modalities of autonomic control of the nervous system can be used to discriminate different potential workload processes generated during assisted-driving tasks and automation complacency situations. Thirty-five drivers were tested in a virtual scenario while using a head-up advanced warning assistance system. Repeated-measures MANOVAs were used to examine changes in autonomic activity across a combination of different user interactions generated by the advanced assistance system: (1) expected take-over request without anticipatory warning; (2) expected take-over request with a two-second anticipatory warning; (3) unexpected take-over request with misleading warning; (4) unexpected take-over request without warning. Results show that analysis of autonomic modulations can discriminate two different resource-allocation processes, related to different behavioral performances. The user interaction that required divided attention under expected situations produced performance enhancement and reciprocally-coupled parasympathetic inhibition with sympathetic activity. At the same time, supervising interactions that generated automation complacency were specifically characterized by uncoupled sympathetic activation. Safety implications for automated assistance system development are considered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Continuous-flow automation and hemolysis index: a crucial combination.

    PubMed

    Lippi, Giuseppe; Plebani, Mario

    2013-04-01

    A paradigm shift has occurred in the role and organization of laboratory diagnostics over the past decades, wherein consolidation or networking of small laboratories into larger factories and point-of-care testing have simultaneously evolved and now seem to favorably coexist. There is now evidence, however, that the growing implementation of continuous-flow automation, especially in closed systems, has not eased the identification of hemolyzed specimens, since the integration of preanalytical and analytical workstations hides them from visual scrutiny, with an inherent risk that unreliable test results may be released to the stakeholders. Along with other technical breakthroughs, the new generation of laboratory instrumentation is increasingly equipped with systems by which specimens can be systematically and automatically tested for a broad series of interferences, the so-called serum indices, which also include the hemolysis index. The routine implementation of these technical tools in clinical laboratories equipped with continuous-flow automation carries several advantages and some drawbacks, which are discussed in this article.

  15. The Electrolyte Genome project: A big data approach in battery materials discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, Xiaohui; Jain, Anubhav; Rajput, Nav Nidhi

    2015-06-01

    We present a high-throughput infrastructure for the automated calculation of molecular properties with a focus on battery electrolytes. The infrastructure is largely open-source and handles both practical aspects (input file generation, output file parsing, and information management) as well as more complex problems (structure matching, salt complex generation, and failure recovery). Using this infrastructure, we have computed the ionization potential (IP) and electron affinities (EA) of 4830 molecules relevant to battery electrolytes (encompassing almost 55,000 quantum mechanics calculations) at the B3LYP/6-31+G(*) level. We describe automated workflows for computing redox potential, dissociation constant, and salt-molecule binding complex structure generation. We present routines for automatic recovery from calculation errors, which brings the failure rate from 9.2% to 0.8% for the QChem DFT code. Automated algorithms to check duplication between two arbitrary molecules and structures are described. We present benchmark data on basis sets and functionals on the G2-97 test set; one finding is that an IP/EA calculation method that combines PBE geometry optimization and B3LYP energy evaluation requires less computational cost and yields nearly identical results compared with a full B3LYP calculation, and could be suitable for the calculation of large molecules. Our data indicate that among the 8 functionals tested, XYGJ-OS and B3LYP are the two best functionals for predicting IP/EA, with RMSEs of 0.12 and 0.27 eV, respectively. Application of our automated workflow to a large set of quinoxaline derivative molecules shows that the functional group effect and substitution position effect can be separated for the IP/EA of quinoxaline derivatives, and that the most sensitive position is different for IP and EA. Published by Elsevier B.V.
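
    The IP and EA values discussed above reduce to total-energy differences once the quantum-chemistry runs are complete; a minimal sketch, with made-up example energies:

      # Vertical IP/EA as total-energy differences (energies in Hartree).
      # The example energies below are invented for illustration.
      HARTREE_TO_EV = 27.211386

      def ionization_potential(e_neutral, e_cation):
          # Removing an electron costs E(cation) - E(neutral).
          return (e_cation - e_neutral) * HARTREE_TO_EV

      def electron_affinity(e_neutral, e_anion):
          # Attaching an electron releases E(neutral) - E(anion).
          return (e_neutral - e_anion) * HARTREE_TO_EV

      print(ionization_potential(-385.1021, -384.8514))  # ~6.8 eV
      print(electron_affinity(-385.1021, -385.1379))     # ~1.0 eV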

  16. The Upgrade Programme for the Structural Biology beamlines at the European Synchrotron Radiation Facility - High throughput sample evaluation and automation

    NASA Astrophysics Data System (ADS)

    Theveneau, P.; Baker, R.; Barrett, R.; Beteva, A.; Bowler, M. W.; Carpentier, P.; Caserotto, H.; de Sanctis, D.; Dobias, F.; Flot, D.; Guijarro, M.; Giraud, T.; Lentini, M.; Leonard, G. A.; Mattenet, M.; McCarthy, A. A.; McSweeney, S. M.; Morawe, C.; Nanao, M.; Nurizzo, D.; Ohlsson, S.; Pernot, P.; Popov, A. N.; Round, A.; Royant, A.; Schmid, W.; Snigirev, A.; Surr, J.; Mueller-Dieckmann, C.

    2013-03-01

    Automation and advances in technology are the key elements in addressing the steadily increasing complexity of Macromolecular Crystallography (MX) experiments. Much of this complexity is due to the inter- and intra-crystal heterogeneity in diffraction quality often observed for crystals of multi-component macromolecular assemblies or membrane proteins. Such heterogeneity makes high-throughput sample evaluation an important and necessary tool for increasing the chances of a successful structure determination. The introduction at the ESRF of automatic sample changers in 2005 dramatically increased the number of samples that were tested for diffraction quality. This "first generation" of automation, coupled with advances in software aimed at optimising data collection strategies in MX, resulted in a three-fold increase in the number of crystal structures elucidated per year using data collected at the ESRF. In addition, sample evaluation can be further complemented using small angle scattering experiments on the newly constructed bioSAXS facility on BM29 and the micro-spectroscopy facility (ID29S). The construction of a second generation of automated facilities on the MASSIF (Massively Automated Sample Screening Integrated Facility) beamlines will build on these advances and should provide a paradigm shift in how MX experiments are carried out, which will benefit the entire Structural Biology community.

  17. Evolution of solid rocket booster component testing

    NASA Technical Reports Server (NTRS)

    Lessey, Joseph A.

    1989-01-01

    The evolution of one of the new generation of test sets developed for the Solid Rocket Booster of the U.S. Space Transportation System is described. Requirements leading to factory checkout of the test set are explained, including the evolution from manual to semiautomated toward fully automated status. Individual improvements in built-in test equipment, self-calibration, and software flexibility are addressed, and the insertion of fault detection to improve reliability is discussed.

  18. Automated workflows for modelling chemical fate, kinetics and toxicity.

    PubMed

    Sala Benito, J V; Paini, Alicia; Richarz, Andrea-Nicole; Meinl, Thorsten; Berthold, Michael R; Cronin, Mark T D; Worth, Andrew P

    2017-12-01

    Automation is universal in today's society, from operating equipment such as machinery, in factory processes, to self-parking automobile systems. While these examples show the efficiency and effectiveness of automated mechanical processes, automated procedures that support the chemical risk assessment process are still in their infancy. Future human safety assessments will rely increasingly on the use of automated models, such as physiologically based kinetic (PBK) and dynamic models and the virtual cell based assay (VCBA). These biologically-based models will be coupled with chemistry-based prediction models that also automate the generation of key input parameters such as physicochemical properties. The development of automated software tools is an important step in harmonising and expediting the chemical safety assessment process. In this study, we illustrate how the KNIME Analytics Platform can be used to provide a user-friendly graphical interface for these biokinetic models, such as PBK models and VCBA, which simulate the fate of chemicals in vivo within the body and in in vitro test systems, respectively. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
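
    As a toy illustration of the kind of kinetic model such workflows wrap, here is a one-compartment model with first-order elimination solved with SciPy; the rate constant and initial concentration are invented, and real PBK models couple many compartments:

      # One-compartment kinetic model: dC/dt = -k * C after a bolus dose.
      # All parameter values are invented for illustration only.
      import numpy as np
      from scipy.integrate import solve_ivp

      k = 0.25    # first-order elimination rate constant (1/h), assumed
      c0 = 10.0   # initial concentration (mg/L), assumed

      sol = solve_ivp(lambda t, c: -k * c, t_span=(0.0, 24.0),
                      y0=[c0], t_eval=np.linspace(0.0, 24.0, 25))

      for t, c in zip(sol.t, sol.y[0]):
          print(f"t = {t:5.1f} h   C = {c:6.3f} mg/L")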

  19. [Building bridges toward the 21st century].

    PubMed

    Sasaki, M

    2000-10-01

    Just as Rome was not built in a day, there are few great inventions and discoveries that can be made overnight. There are always historical circumstances behind them. Laboratory Automation is not an exception. With the end of World War II in 1945 as a turning point, a large volume of American medicine was introduced all over Japan, and clinical laboratory testing which was imported at the same time has taken root and matured. As a result, we can now carry out prompt and fully automated laboratory testing second to none at many hospital laboratories. In this paper, I recall the development and summarize the expansion by focusing on clinical laboratory automation as it has developed in the latter half of the 20th century in Japan. I would feel amply rewarded for my efforts if this paper proved helpful to the young generation. The clinical laboratory of the 21st century rests on their shoulders.

  20. Relationship between second-generation frequency doubling technology and standard automated perimetry in patients with glaucoma.

    PubMed

    Zarkovic, Andrea; Mora, Justin; McKelvie, James; Gamble, Greg

    2007-12-01

    The aim of the study was to establish the correlation between visual field loss as shown by second-generation Frequency Doubling Technology (Humphrey Matrix) and Standard Automated Perimetry (Humphrey Field Analyser) in patients with glaucoma. Test duration and reliability were also compared. Forty right eyes from glaucoma patients from a private ophthalmology practice were included in this prospective study. All participants had tests within an 8-month period. Pattern deviation plots and mean deviation were compared to establish the correlation between the two perimetry tests. Overall correlation and correlation between hemifields, quadrants and individual test locations were assessed. Humphrey Field Analyser tests were slightly more reliable (37/40 vs. 34/40 for Matrix) but overall of longer duration. There was good correlation (0.69) between mean deviations. Superior hemifields and superonasal quadrants had the highest correlation (0.88 [95% CI 0.79, 0.94]). Correlation between individual points was independent of distance from the macula. Generally, the Matrix and Humphrey Field Analyser perimetry correlate well; however, each machine utilizes a different method of analysing data and thus direct comparison should be made with caution.

  1. Two Different Approaches to Automated Mark Up of Emotions in Text

    NASA Astrophysics Data System (ADS)

    Francisco, Virginia; Hervás, Raquel; Gervás, Pablo

    This paper presents two different approaches to automated marking up of texts with emotional labels. In the first approach, a corpus of example texts previously annotated by human evaluators is mined for an initial assignment of emotional features to words. This results in a List of Emotional Words (LEW), which becomes a useful resource for later automated mark up. The mark up algorithm in this first approach mirrors closely the steps taken during feature extraction, employing for the actual assignment of emotional features a combination of the LEW resource and WordNet for knowledge-based expansion of words not occurring in LEW. The algorithm for automated mark up is tested against new text samples to test its coverage. The second approach marks up texts during their generation. A knowledge base contains the necessary information for marking up the text, related to actions and characters. The algorithm in this case employs the information in the knowledge base and decides the correct emotion for every sentence. The algorithm for automated mark up is tested against four different texts. The results of the two approaches are compared and discussed with respect to three main issues: the relative adequacy of each of the representations used, the correctness and coverage of the proposed algorithms, and additional techniques and solutions that may be employed to improve the results.
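
    A minimal sketch of the first approach's expansion step, assuming LEW is a simple word-to-emotion lookup and using NLTK's WordNet interface; the tiny LEW and its labels are invented:

      # Knowledge-based expansion of words missing from the List of
      # Emotional Words (LEW) via WordNet synonyms. The tiny LEW and its
      # labels are invented. Requires: nltk.download("wordnet")
      from nltk.corpus import wordnet as wn

      LEW = {"joyful": "happiness", "furious": "anger", "terrified": "fear"}

      def emotion_of(word):
          if word in LEW:                      # direct hit in the LEW resource
              return LEW[word]
          for synset in wn.synsets(word):      # expand through WordNet synonyms
              for lemma in synset.lemma_names():
                  if lemma in LEW:
                      return LEW[lemma]
          return None                          # no emotional feature assigned

      print(emotion_of("furious"))   # anger (direct lookup)
      print(emotion_of("angry"))     # may resolve to anger via a shared synset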

  2. Modeling of prepregs during automated draping sequences

    NASA Astrophysics Data System (ADS)

    Krogh, Christian; Glud, Jens A.; Jakobsen, Johnny

    2017-10-01

    The behavior of woven prepreg fabric during automated draping sequences is investigated. A drape tool under development with an arrangement of grippers facilitates the placement of a woven prepreg fabric in a mold. It is essential that the draped configuration is free from wrinkles and other defects. The present study aims at setting up a virtual draping framework capable of modeling the draping process from the initial flat fabric to the final double-curved shape, and at assisting the development of an automated drape tool. The virtual draping framework consists of a kinematic mapping algorithm used to generate target points on the mold, which are used as input to a draping sequence planner. The draping sequence planner prescribes the displacement history for each gripper in the drape tool, and these displacements are then applied to each gripper in a transient model of the draping sequence. The model is based on a transient finite element analysis, with the material's constitutive behavior currently approximated as linear elastic orthotropic. In-plane tensile and bias-extension tests as well as bending tests are conducted and used as input for the model. The virtual draping framework shows good potential for obtaining a better understanding of the drape process and guiding the development of the drape tool. However, results obtained from using the framework on a simple test case indicate that the generation of draping sequences is non-trivial.

  3. Automated Testcase Generation for Numerical Support Functions in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Schnieder, Stefan-Alexander

    2014-01-01

    We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.
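
    The reference-implementation step can be illustrated with a plain differential test; here the stimuli are hand-picked boundary values rather than KLEE-generated, and the tolerance is an assumption:

      # Differential testing of a numerical support function against a
      # reference implementation. Stimuli are hand-picked boundary values
      # here; in the paper they come from symbolic execution with KLEE.
      import math

      def sin_approx(x):
          # 7th-order Taylor polynomial (the "code under test"), valid near 0.
          return x - x**3 / 6.0 + x**5 / 120.0 - x**7 / 5040.0

      stimuli = [0.0, 1e-9, -1e-9, 0.5, -0.5, math.pi / 4, -math.pi / 4, 1.0]
      TOL = 1e-6  # assumed acceptance tolerance

      for x in stimuli:
          got, ref = sin_approx(x), math.sin(x)
          status = "ok" if abs(got - ref) <= TOL else "FAIL"
          # x = 1.0 is expected to exceed the tolerance, showing a detected divergence.
          print(f"{status}: sin({x:+.3e}) -> {got:+.9f} (ref {ref:+.9f})")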

  4. On grey levels in random CAPTCHA generation

    NASA Astrophysics Data System (ADS)

    Newton, Fraser; Kouritzin, Michael A.

    2011-06-01

    A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
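
    A minimal sketch of the random-field idea: a three-level Potts-style field evolved by Gibbs sampling so that neighboring pixels tend to agree; the grid size, coupling strength, and number of grey levels are assumptions, and a real CAPTCHA generator also conditions the field on character glyphs:

      # Gibbs sampling of a 3-level Potts-style random field: each site
      # prefers to agree with its 4 neighbors. Parameters are illustrative
      # assumptions; a real CAPTCHA also encodes character shapes.
      import numpy as np

      rng = np.random.default_rng(0)
      LEVELS, BETA, H, W, SWEEPS = 3, 1.2, 32, 96, 30

      field = rng.integers(0, LEVELS, size=(H, W))   # initial background noise

      for _ in range(SWEEPS):
          for i in range(H):
              for j in range(W):
                  # Count neighbor agreement for each candidate grey level.
                  neighbors = [field[(i - 1) % H, j], field[(i + 1) % H, j],
                               field[i, (j - 1) % W], field[i, (j + 1) % W]]
                  energy = np.array([sum(n == v for n in neighbors)
                                     for v in range(LEVELS)], dtype=float)
                  p = np.exp(BETA * energy)
                  field[i, j] = rng.choice(LEVELS, p=p / p.sum())

      print(field[:4, :12])   # smoothed grey-level texture after sampling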

  5. pO(2) measurements by phosphorescence quenching: characteristics and applications of an automated system.

    PubMed

    Kerger, Heinz; Groth, Gesine; Kalenka, Armin; Vajkoczy, Peter; Tsai, Amy G; Intaglietta, Marcos

    2003-01-01

    An automated system for pO(2) analysis based upon phosphorescence quenching was tested. The system was calibrated in vitro with capillary samples of saline and blood. Results were compared to a conventional measuring procedure wherein pO(2) was calculated off-line by computer fitting of phosphorescence decay signals. PO(2) measurements obtained by the automated system were correlated (r(2) = 0.98) with readings simultaneously generated by a blood gas analyzer, accuracy being highest in the low (0-20 mm Hg) and medium pO(2) ranges (21-70 mm Hg). Measurements in in vivo studies in the hamster skin-fold preparation were similar to previously reported results. The automated system fits the phosphorescence decay data to a single exponential and allows repeated pO(2) measurements in rapid sequence.
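
    The off-line fitting procedure mentioned above amounts to fitting a single-exponential decay and converting the lifetime to pO(2); a sketch using the standard Stern-Volmer relation 1/tau = 1/tau0 + kq * pO2, with invented calibration constants and a synthetic signal:

      # Fit a single-exponential phosphorescence decay, then convert the
      # lifetime to pO2 via Stern-Volmer: 1/tau = 1/tau0 + kq * pO2.
      # Calibration constants and the synthetic signal are invented.
      import numpy as np
      from scipy.optimize import curve_fit

      TAU0 = 600e-6   # zero-oxygen lifetime (s), assumed calibration value
      KQ = 300.0      # quenching constant (1/(s*mmHg)), assumed

      def decay(t, amplitude, tau):
          return amplitude * np.exp(-t / tau)

      # Synthetic decay with tau = 250 us plus a little noise.
      t = np.linspace(0, 2e-3, 200)
      signal = decay(t, 1.0, 250e-6) + np.random.default_rng(1).normal(0, 0.01, t.size)

      (amp, tau), _ = curve_fit(decay, t, signal, p0=(1.0, 300e-6))
      po2 = (1.0 / tau - 1.0 / TAU0) / KQ
      print(f"tau = {tau*1e6:.1f} us  ->  pO2 = {po2:.1f} mmHg")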

  6. Automated building of organometallic complexes from 3D fragments.

    PubMed

    Foscato, Marco; Venkatraman, Vishwesh; Occhipinti, Giovanni; Alsberg, Bjørn K; Jensen, Vidar R

    2014-07-28

    A method for the automated construction of three-dimensional (3D) molecular models of organometallic species in design studies is described. Molecular structure fragments derived from crystallographic structures and accurate molecular-level calculations are used as 3D building blocks in the construction of multiple molecular models of analogous compounds. The method allows for precise control of stereochemistry and geometrical features that may otherwise be very challenging, or even impossible, to achieve with commonly available generators of 3D chemical structures. The new method was tested in the construction of three sets of active or metastable organometallic species of catalytic reactions in the homogeneous phase. The performance of the method was compared with those of commonly available methods for automated generation of 3D models, demonstrating higher accuracy of the prepared 3D models in general, and, in particular, a much wider range with respect to the kind of chemical structures that can be built automatically, with capabilities far beyond standard organic and main-group chemistry.

  7. Integrated Platform for Expedited Synthesis–Purification–Testing of Small Molecule Libraries

    PubMed Central

    2017-01-01

    The productivity of medicinal chemistry programs can be significantly increased through the introduction of automation, leading to shortened discovery cycle times. Herein, we describe a platform that consolidates synthesis, purification, quantitation, dissolution, and testing of small molecule libraries. The system was validated through the synthesis and testing of two libraries of binders of polycomb protein EED, and excellent correlation of obtained data with results generated through conventional approaches was observed. The fully automated and integrated platform enables batch-supported compound synthesis based on a broad array of chemical transformations with testing in a variety of biochemical assay formats. A library turnaround time of between 24 and 36 h was achieved, and notably, each library synthesis produces sufficient amounts of compounds for further evaluation in secondary assays thereby contributing significantly to the shortening of medicinal chemistry discovery cycles. PMID:28435537

  8. Automated Rocket Propulsion Test Management

    NASA Technical Reports Server (NTRS)

    Walters, Ian; Nelson, Cheryl; Jones, Helene

    2007-01-01

    The Rocket Propulsion Test-Automated Management System provides a central location for managing activities associated with Rocket Propulsion Test Management Board, National Rocket Propulsion Test Alliance, and the Senior Steering Group business management activities. A set of authorized users, both on-site and off-site with regard to Stennis Space Center (SSC), can access the system through a Web interface. Web-based forms are used for user input with generation and electronic distribution of reports easily accessible. Major functions managed by this software include meeting agenda management, meeting minutes, action requests, action items, directives, and recommendations. Additional functions include electronic review, approval, and signatures. A repository/library of documents is available for users, and all items are tracked in the system by unique identification numbers and status (open, closed, percent complete, etc.). The system also provides queries and version control for input of all items.

  9. 78 FR 66039 - Modification of National Customs Automation Program Test Concerning Automated Commercial...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-04

    ... Customs Automation Program Test Concerning Automated Commercial Environment (ACE) Cargo Release (Formerly... Simplified Entry functionality in the Automated Commercial Environment (ACE). Originally, the test was known...) test concerning Automated Commercial Environment (ACE) Simplified Entry (SE test) functionality is...

  10. Seamless integration of dose-response screening and flow chemistry: efficient generation of structure-activity relationship data of β-secretase (BACE1) inhibitors.

    PubMed

    Werner, Michael; Kuratli, Christoph; Martin, Rainer E; Hochstrasser, Remo; Wechsler, David; Enderle, Thilo; Alanine, Alexander I; Vogel, Horst

    2014-02-03

    Drug discovery is a multifaceted endeavor encompassing, as its core element, the generation of structure-activity relationship (SAR) data by repeated chemical synthesis and biological testing of tailored molecules. Herein, we report on the development of a flow-based biochemical assay and its seamless integration into a fully automated system comprising flow chemical synthesis, purification, and in-line quantification of compound concentration. This novel synthesis-screening platform makes it possible to obtain SAR data on β-secretase (BACE1) inhibitors at an unprecedented cycle time of only 1 h instead of several days. Full integration and automation of industrial processes have always led to productivity gains and cost reductions, and this work demonstrates how applying these concepts to SAR generation may lead to a more efficient drug discovery process. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in their use in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing of the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  12. Automated Generation and Assessment of Autonomous Systems Test Cases

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.

    2008-01-01

    This slide presentation reviews issues in the verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios, using work from the Dawn mission's tests as examples. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes, or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it is likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage; for example, generating cases for all possible fault monitors and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing. Making detailed predictions of the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predictions individually is impractical, while generating predictions with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies and in presenting an overall picture of the state of the test program. In this paper we present a case study based on the automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
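
    The "all fault monitors across all state change boundaries" idea is essentially a cartesian product over the test dimensions; a toy sketch with invented monitor and transition names:

      # Enumerate faulted test cases as the cartesian product of fault
      # monitors, state-change boundaries, and injection timing.
      # All names below are invented for illustration.
      from itertools import product

      fault_monitors = ["thruster_stuck", "star_tracker_drop", "heater_overtemp"]
      state_changes = ["cruise->approach", "approach->orbit_insertion"]
      timings = ["before", "during", "after"]

      test_cases = [
          {"monitor": m, "boundary": b, "inject": w}
          for m, b, w in product(fault_monitors, state_changes, timings)
      ]
      print(len(test_cases), "cases, e.g.", test_cases[0])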

  13. Operational Assessment of Color Vision

    DTIC Science & Technology

    2016-06-20

    Only fragments of this DTIC record are available. Subject terms: color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, Color Dx, OBVA. The abstract fragments note that color-coded symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations, and that computer-generated display systems have facilitated the development of computer-based, automated tests of color vision.

  14. Simulation in production of open rotor propellers: from optimal surface geometry to automated control of mechanical treatment

    NASA Astrophysics Data System (ADS)

    Grinyok, A.; Boychuk, I.; Perelygin, D.; Dantsevich, I.

    2018-03-01

    A combined method for the simulation and production design of open rotor propellers is studied. An end-to-end scheme is proposed for evaluating, designing, and experimentally testing the optimal geometry of the propeller surface, for generating the machine control path, and for simulating the force conditions in the cutting zone and their relationship with treatment accuracy, which is defined by the elastic deformation of the propeller. The simulation data enabled the realization of combined automated path control of the cutting tool.

  15. Coverage criteria for test case generation using UML state chart diagram

    NASA Astrophysics Data System (ADS)

    Salman, Yasir Dawood; Hashim, Nor Laily; Rejab, Mawarny Md; Romli, Rohaida; Mohd, Haslina

    2017-10-01

    To improve the effectiveness of test data generation during software testing, many studies have focused on the automation of test data generation from UML diagrams. One of these diagrams is the UML state chart diagram. Test cases are generally evaluated according to coverage criteria. However, combinations of multiple criteria are required to achieve better coverage. Different studies have used various numbers and types of coverage criteria in their methods and approaches. The objective of this paper is to propose suitable coverage criteria for test case generation using the UML state chart diagram, especially in handling loops. To achieve this objective, this work reviewed previous studies to present the most practical coverage criteria combinations, including all-states, all-transitions, all-transition-pairs, and all-loop-free-paths coverage. Calculations to determine the coverage percentage of the proposed coverage criteria are presented, together with an example, as they are applied to a UML state chart diagram. This finding would be beneficial in the area of test case generation, especially in handling loops in UML state chart diagrams.
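
    The coverage calculations reduce to a ratio of exercised items to required items; a generic sketch for all-transitions coverage, with invented transition sets:

      # All-transitions coverage: percentage of state-chart transitions
      # exercised by a test suite. Transition names are invented examples.
      required = {"idle->running", "running->paused", "paused->running",
                  "running->done", "running->idle"}
      exercised = {"idle->running", "running->paused", "paused->running"}

      coverage = 100.0 * len(required & exercised) / len(required)
      print(f"all-transitions coverage: {coverage:.0f}%")   # 60%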

  16. Automated Run-Time Mission and Dialog Generation

    DTIC Science & Technology

    2007-03-01

    Only report-form metadata and table-of-contents fragments of this DTIC record are available. Subject terms (as given): Processing, Social Network Analysis, Simulation, Automated Scenario Generation. The listed topics include mission and dialog generation and social networks.

  17. Correction of electronic record for weighing bucket precipitation gauge measurements

    USDA-ARS's Scientific Manuscript database

    Electronic sensors generate valuable streams of forcing and validation data for hydrologic models, but are often subject to noise, which must be removed as part of model input and testing database development. We developed the Automated Precipitation Correction Program (APCP) for weighing bucket preci...

  18. Harnessing scientific literature reports for pharmacovigilance. Prototype software analytical tool development and usability testing.

    PubMed

    Sorbello, Alfred; Ripple, Anna; Tonning, Joseph; Munoz, Monica; Hasan, Rashedul; Ly, Thomas; Francis, Henry; Bodenreider, Olivier

    2017-03-22

    We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers' capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. A prototype, open source, web-based, software analytical tool generated statistical disproportionality data mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. All usability test participants cited the tool's ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool's automated literature search relative to a manual 'all fields' PubMed search, missing drugs and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end user satisfaction.
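
    Disproportionality signal scores of the kind the tool computes are typically derived from a 2x2 contingency table; a sketch of the proportional reporting ratio (PRR) with invented counts (the abstract does not state which score the tool uses):

      # Proportional reporting ratio (PRR) from a 2x2 contingency table:
      #   a = reports of the drug with the event,  b = the drug without it,
      #   c = other drugs with the event,          d = other drugs without it.
      # Counts are invented; the paper does not specify its exact score.
      def prr(a, b, c, d):
          return (a / (a + b)) / (c / (c + d))

      a, b, c, d = 40, 960, 200, 48800
      print(f"PRR = {prr(a, b, c, d):.1f}")   # ~9.8: candidate safety signal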

  19. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.

  20. Automated Illustration of Patients Instructions

    PubMed Central

    Bui, Duy; Nakamura, Carlos; Bray, Bruce E.; Zeng-Treitler, Qing

    2012-01-01

    A picture can be a powerful communication tool. However, creating pictures to illustrate patient instructions can be a costly and time-consuming task. Building on our prior research in this area, we developed a computer application that automatically converts text to pictures using natural language processing and computer graphics techniques. After iterative testing, the automated illustration system was evaluated using 49 previously unseen cardiology discharge instructions. The completeness of the system-generated illustrations was assessed by three raters using a three-level scale. The average inter-rater agreement for text correctly represented in the pictograph was about 66 percent. Since illustration in this context is intended to enhance rather than replace text, these results support the feasibility of conducting automated illustration. PMID:23304392

  1. Automated spectral classification and the GAIA project

    NASA Technical Reports Server (NTRS)

    Lasala, Jerry; Kurtz, Michael J.

    1995-01-01

    Two-dimensional spectral types for each of the stars observed in the Global Astrometric Interferometer for Astrophysics (GAIA) mission would provide additional information for galactic structure and stellar evolution studies, as well as helping in the identification of unusual objects and populations. The classification of the large quantity of spectra generated requires that automated techniques be implemented. Approaches for automatic classification are reviewed, and a metric-distance method is discussed. In tests, the metric-distance method produced spectral types with mean errors comparable to those of human classifiers working at similar resolution. Data and equipment requirements for an automated classification survey are discussed. A program of auxiliary observations is proposed to yield spectral types and radial velocities for the GAIA-observed stars.
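
    A minimal sketch of a metric-distance classifier: assign each spectrum the type of the nearest template under Euclidean distance; the templates here are random stand-ins for a real spectral library:

      # Metric-distance spectral classification: assign each spectrum the
      # type of the nearest template. Templates are random stand-ins.
      import numpy as np

      rng = np.random.default_rng(2)
      templates = {"A0V": rng.random(100), "G2V": rng.random(100),
                   "M5III": rng.random(100)}

      def classify(spectrum):
          return min(templates,
                     key=lambda t: np.linalg.norm(spectrum - templates[t]))

      unknown = templates["G2V"] + rng.normal(0, 0.05, 100)   # noisy G2V spectrum
      print(classify(unknown))   # expected: G2V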

  2. Comparative evaluation of the Cobas Amplicor HIV-1 Monitor Ultrasensitive Test, the new Cobas AmpliPrep/Cobas Amplicor HIV-1 Monitor Ultrasensitive Test and the Versant HIV RNA 3.0 assays for quantitation of HIV-1 RNA in plasma samples.

    PubMed

    Berger, Annemarie; Scherzed, Lina; Stürmer, Martin; Preiser, Wolfgang; Doerr, Hans Wilhelm; Rabenau, Holger Felix

    2005-05-01

    There are several commercially available assays for the quantitation of HIV RNA. A new automated specimen preparation system, the Cobas AmpliPrep, was developed to automate this part of the PCR process. We compared the results obtained by the Roche Cobas Amplicor HIV-1 Monitor Ultrasensitive Test (MCA, manual sample preparation) with those obtained by the Versant HIV-1 RNA 3.0 assay (bDNA). Second, we compared the MCA with the new Cobas AmpliPrep/Cobas Amplicor HIV Monitor Ultrasensitive Test (CAP/CA, automated specimen preparation) by investigating clinical patient samples and a panel of HIV-1 non-B subtypes. Furthermore, we assessed the assay throughput and workflow (especially hands-on time) for all three assays. Seventy-two percent of the 140 investigated patient samples gave concordant results in the bDNA and MCA assays. The MCA values were regularly higher than the bDNA values. One sample was detected only by the MCA within the linear range of quantification. In contrast, 38 samples with results <50 copies/ml in the MCA showed bDNA results between 51 and 1644 copies/ml (mean value 74 copies/ml); 21 of these specimens were shown to have detectable HIV RNA <50 copies/ml in the MCA assay. The overall agreement between the MCA and the CAP/CA was 94.3% (551/584). The quantification results showed significant correlation, although the CAP/CA generated values slightly lower than those generated by the manual procedure. We found that the CAP/CA produced results comparable with the MCA test in a panel of HIV-1 non-B subtypes. All three assays showed comparable results. The bDNA provides a high sample throughput without the need for full automation. The new CAP/CA provides reliable test results with no HIV-subtype-specific influence and frees time for other work in the laboratory; thus it is suitable for routine diagnostic PCR.

  3. Autonomously Generating Operations Sequences for a Mars Rover Using Artificial Intelligence-Based Planning

    NASA Astrophysics Data System (ADS)

    Sherwood, R.; Mutz, D.; Estlin, T.; Chien, S.; Backes, P.; Norris, J.; Tran, D.; Cooper, B.; Rabideau, G.; Mishkin, A.; Maxwell, S.

    2001-07-01

    This article discusses a proof-of-concept prototype for ground-based automatic generation of validated rover command sequences from high-level science and engineering activities. This prototype is based on ASPEN, the Automated Scheduling and Planning Environment. This artificial intelligence (AI)-based planning and scheduling system will automatically generate a command sequence that will execute within resource constraints and satisfy flight rules. An automated planning and scheduling system encodes rover design knowledge and uses search and reasoning techniques to automatically generate low-level command sequences while respecting rover operability constraints, science and engineering preferences, environmental predictions, and also adhering to hard temporal constraints. This prototype planning system has been field-tested using the Rocky 7 rover at JPL and will be field-tested on more complex rovers to prove its effectiveness before transferring the technology to flight operations for an upcoming NASA mission. Enabling goal-driven commanding of planetary rovers greatly reduces the requirements for highly skilled rover engineering personnel. This in turn greatly reduces mission operations costs. In addition, goal-driven commanding permits a faster response to changes in rover state (e.g., faults) or science discoveries by removing the time-consuming manual sequence validation process, allowing rapid "what-if" analyses, and thus reducing overall cycle times.

  4. Automated Monitoring with a BSP Fault-Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L.; Herzog, James P.

    2003-01-01

    The figure schematically illustrates a method and procedure for automated monitoring of an asset, as well as a hardware- and-software system that implements the method and procedure. As used here, asset could signify an industrial process, power plant, medical instrument, aircraft, or any of a variety of other systems that generate electronic signals (e.g., sensor outputs). In automated monitoring, the signals are digitized and then processed in order to detect faults and otherwise monitor operational status and integrity of the monitored asset. The major distinguishing feature of the present method is that the fault-detection function is implemented by use of a Bayesian sequential probability (BSP) technique. This technique is superior to other techniques for automated monitoring because it affords sensitivity, not only to disturbances in the mean values, but also to very subtle changes in the statistical characteristics (variance, skewness, and bias) of the monitored signals.
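
    The BSP technique belongs to the family of sequential hypothesis tests; as a hedged illustration, here is a minimal Wald-style sequential probability ratio test (SPRT) for a Gaussian mean shift, with assumed means, variance, and error rates:

      # Sequential probability ratio test (SPRT) for a Gaussian mean shift,
      # the family of sequential tests the BSP technique belongs to.
      # Means, variance, and error rates are assumed for illustration.
      import math

      MU0, MU1, SIGMA = 0.0, 1.0, 1.0    # healthy mean, faulted mean, std dev
      ALPHA, BETA = 0.05, 0.05           # false-alarm and missed-alarm rates
      A = math.log((1 - BETA) / ALPHA)   # upper (fault) threshold
      B = math.log(BETA / (1 - ALPHA))   # lower (healthy) threshold

      def sprt(samples):
          llr = 0.0
          for n, x in enumerate(samples, 1):
              # Log-likelihood ratio increment for N(MU1, SIGMA) vs N(MU0, SIGMA).
              llr += ((x - MU0) ** 2 - (x - MU1) ** 2) / (2 * SIGMA ** 2)
              if llr >= A:
                  return f"fault detected at sample {n}"
              if llr <= B:
                  return f"healthy decision at sample {n}"
          return "no decision yet"

      print(sprt([0.9, 1.2, 0.8, 1.1, 1.3, 0.7, 1.0]))   # fault detected at sample 6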

  5. Automated lattice data generation

    NASA Astrophysics Data System (ADS)

    Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.

    2018-03-01

    The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
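
    The abstract does not show Taxi's API, so as a generic illustration of the workflow-manager pattern it describes, here is a tiny dependency-ordered task runner; this is not Taxi's actual interface, and all task names are invented:

      # Generic dependency-ordered task runner illustrating the
      # workflow-manager pattern; NOT Taxi's actual API.
      def run(tasks):
          done = set()
          pending = dict(tasks)                 # name -> (dependencies, action)
          while pending:
              ready = [n for n, (deps, _) in pending.items() if set(deps) <= done]
              if not ready:
                  raise RuntimeError("cyclic or unsatisfiable dependencies")
              for name in ready:
                  _, action = pending.pop(name)
                  action()
                  done.add(name)

      run({
          "generate_config_0": ([], lambda: print("MC update stream 0")),
          "measure_plaquette": (["generate_config_0"], lambda: print("measure")),
          "archive": (["measure_plaquette"], lambda: print("archive results")),
      })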

  6. Automated Microfluidic Platform for Serial Polymerase Chain Reaction and High-Resolution Melting Analysis.

    PubMed

    Cao, Weidong; Bean, Brian; Corey, Scott; Coursey, Johnathan S; Hasson, Kenton C; Inoue, Hiroshi; Isano, Taisuke; Kanderian, Sami; Lane, Ben; Liang, Hongye; Murphy, Brian; Owen, Greg; Shinoda, Nobuhiko; Zeng, Shulin; Knight, Ivor T

    2016-06-01

    We report the development of an automated genetic analyzer for human sample testing based on microfluidic rapid polymerase chain reaction (PCR) with high-resolution melting analysis (HRMA). The integrated DNA microfluidic cartridge was used on a platform designed with a robotic pipettor system that works by sequentially picking up different test solutions from a 384-well plate, mixing them in the tips, and delivering mixed fluids to the DNA cartridge. A novel image feedback flow control system based on a Canon 5D Mark II digital camera was developed for controlling fluid movement through a complex microfluidic branching network without the use of valves. The same camera was used for measuring the high-resolution melt curve of DNA amplicons that were generated in the microfluidic chip. Owing to fast heating and cooling as well as sensitive temperature measurement in the microfluidic channels, the time frame for PCR and HRMA was dramatically reduced from hours to minutes. Preliminary testing results demonstrated that rapid serial PCR and HRMA are possible while still achieving high data quality that is suitable for human sample testing. © 2015 Society for Laboratory Automation and Screening.

  7. Automated Sequence Generation Process and Software

    NASA Technical Reports Server (NTRS)

    Gladden, Roy

    2007-01-01

    "Automated sequence generation" (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences.

  8. Development Status: Automation Advanced Development Space Station Freedom Electric Power System

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Kish, James A.; Mellor, Pamela A.

    1990-01-01

    Electric power system automation for Space Station Freedom is intended to operate in a loop. Data from the power system is used for diagnosis and security analysis to generate Operations Management System (OMS) requests, which are sent to an arbiter, which sends a plan to a commander generator connected to the electric power system. This viewgraph presentation profiles automation software for diagnosis, scheduling, and constraint interfaces, and simulation to support automation development. The automation development process is diagrammed, and the process of creating Ada and ART versions of the automation software is described.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andrew; Lawrence, Earl

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
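
    The LHS-then-Gaussian-process pipeline can be sketched with standard libraries; the cheap analytic response below stands in for a TPMC drag-coefficient run:

      # Latin Hypercube Sample + Gaussian-process fit, the surrogate-modeling
      # pipeline described above. The toy response stands in for a TPMC run.
      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor

      def tpmc_stand_in(x):
          # Cheap analytic stand-in for a TPMC drag-coefficient simulation.
          return 2.2 + 0.5 * np.sin(3.0 * x[:, 0]) + 0.3 * x[:, 1] ** 2

      # Latin Hypercube Sample over two normalized input parameters.
      sample = qmc.LatinHypercube(d=2, seed=0).random(n=64)
      y = tpmc_stand_in(sample)

      # Fit the response surface and query it at a new point.
      gp = GaussianProcessRegressor().fit(sample, y)
      mean, std = gp.predict(np.array([[0.5, 0.5]]), return_std=True)
      print(f"Cd ~= {mean[0]:.3f} +/- {std[0]:.3f}")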

  10. Automating lexical cross-mapping of ICNP to SNOMED CT.

    PubMed

    Kim, Tae Youn

    2016-01-01

    The purpose of this study was to examine the feasibility of automating lexical cross-mapping of a logic-based nursing terminology (ICNP) to SNOMED CT using the Unified Medical Language System (UMLS) maintained by the U.S. National Library of Medicine. A two-stage approach included pattern identification, followed by application and evaluation of an automated term matching procedure. The performance of the automated procedure was evaluated using a test set against a gold standard (i.e., a concept equivalency table) created independently by terminology experts. There were lexical similarities between ICNP diagnostic concepts and SNOMED CT. The automated term matching procedure was reliable, with a recall of 65%, precision of 79%, accuracy of 82%, F-measure of 0.71, and area under the receiver operating characteristic (ROC) curve of 0.78 (95% CI 0.73-0.83). When the automated procedure was not able to retrieve lexically matched concepts, it was also unlikely that terminology experts would identify a matched SNOMED CT concept. Although further research is warranted to enhance the automated matching procedure, the combination of cross-maps from UMLS and the automated procedure is useful for generating candidate mappings and thus assisting the ongoing maintenance of mappings, which is a significant burden to terminology developers.
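
    The evaluation against the gold standard uses the usual retrieval metrics; a sketch with counts invented to roughly reproduce the reported figures:

      # Standard evaluation metrics against a gold-standard mapping table.
      # Counts below are invented, chosen to roughly reproduce the study's
      # reported recall 0.65, precision 0.79, accuracy 0.82, F-measure 0.71.
      def evaluate(tp, fp, fn, tn):
          recall = tp / (tp + fn)
          precision = tp / (tp + fp)
          accuracy = (tp + tn) / (tp + fp + fn + tn)
          f_measure = 2 * precision * recall / (precision + recall)
          return recall, precision, accuracy, f_measure

      print(evaluate(tp=130, fp=34, fn=70, tn=344))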

  11. Classification of Automated Search Traffic

    NASA Astrophysics Data System (ADS)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site's rank, augmenting online games, or possibly maliciously altering click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, and behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.

  12. Quality Assurance and T&E of Inertial Systems for RLV Mission

    NASA Astrophysics Data System (ADS)

    Sathiamurthi, S.; Thakur, Nayana; Hari, K.; Peter, Pilmy; Biju, V. S.; Mani, K. S.

    2017-12-01

    This work describes the quality assurance and Test and Evaluation (T&E) activities carried out for the inertial systems flown successfully in India's first reusable launch vehicle technology demonstrator hypersonic experiment mission. As part of the reliability analysis, failure mode effect and criticality analysis and derating analysis were carried out in the initial design phase; the findings were presented to design review forums and the recommendations were implemented. The T&E plan was meticulously worked out and presented to the respective forums for review and implementation. Test data analysis, health parameter plotting and test report generation were automated; these automations significantly reduced the time required for these activities and helped to avoid manual errors. Further, the T&E cycle was optimized without compromising on quality. These specific measures helped to achieve zero-defect delivery of inertial systems for the RLV application.

  13. Automating Visualization Service Generation with the WATT Compiler

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the Soap 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computation workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE users submit Tcl code, retrieve its C++ translation with various files and scripts necessary to locally install the tailor-made web service, or launch the service for a limited session on our test server. This work is supported by NSF through the ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.

  14. Automated high-dose rate brachytherapy treatment planning for a single-channel vaginal cylinder applicator

    NASA Astrophysics Data System (ADS)

    Zhou, Yuhong; Klages, Peter; Tan, Jun; Chi, Yujie; Stojadinovic, Strahinja; Yang, Ming; Hrycushko, Brian; Medin, Paul; Pompos, Arnold; Jiang, Steve; Albuquerque, Kevin; Jia, Xun

    2017-06-01

    High dose rate (HDR) brachytherapy treatment planning is conventionally performed manually and/or with aids of preplanned templates. In general, the standard of care would be elevated by conducting an automated process to improve treatment planning efficiency, eliminate human error, and reduce plan quality variations. Thus, our group is developing AutoBrachy, an automated HDR brachytherapy planning suite of modules used to augment a clinical treatment planning system. This paper describes our proof-of-concept module for vaginal cylinder HDR planning that has been fully developed. After a patient CT scan is acquired, the cylinder applicator is automatically segmented using image-processing techniques. The target CTV is generated based on physician-specified treatment depth and length. Locations of the dose calculation point, apex point and vaginal surface point, as well as the central applicator channel coordinates, and the corresponding dwell positions are determined according to their geometric relationship with the applicator and written to a structure file. Dwell times are computed through iterative quadratic optimization techniques. The planning information is then transferred to the treatment planning system through a DICOM-RT interface. The entire process was tested for nine patients. The AutoBrachy cylindrical applicator module was able to generate treatment plans for these cases with clinical grade quality. Computation times varied between 1 and 3 min on an Intel Xeon CPU E3-1226 v3 processor. All geometric components in the automated treatment plans were generated accurately. The applicator channel tip positions agreed with the manually identified positions with submillimeter deviations and the channel orientations between the plans agreed within less than 1 degree. The automatically generated plans obtained clinically acceptable quality.
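
    Dwell-time optimization of this kind is often posed as a non-negative least-squares problem over a dose-rate kernel; a toy sketch follows (the kernel and prescription are invented, and the paper's iterative quadratic optimizer may differ in detail):

      # Dwell times as non-negative least squares: find t >= 0 minimizing
      # ||A t - d||, where A[i, j] is the dose rate at point i from dwell
      # position j and d is the prescribed dose. All values are invented.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(3)
      n_points, n_dwells = 20, 8
      A = 1.0 / (0.5 + rng.random((n_points, n_dwells)))  # toy dose-rate kernel
      d = np.full(n_points, 7.0)                          # 7 Gy prescription, assumed

      t, residual = nnls(A, d)
      print("dwell times (s):", np.round(t, 2), " residual:", round(residual, 3))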

  15. Generative Representations for Automated Design of Robots

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Lipson, Hod; Pollack, Jordan B.

    2007-01-01

    A method of automated design of complex, modular robots involves an evolutionary process in which generative representations of designs are used. The term generative representations as used here signifies, loosely, representations that consist of or include algorithms, computer programs, and the like, wherein encoded designs can reuse elements of their encoding and thereby evolve toward greater complexity. Automated design of robots through synthetic evolutionary processes has already been demonstrated, but it is not clear whether genetically inspired search algorithms can yield designs that are sufficiently complex for practical engineering. The ultimate success of such algorithms as tools for automation of design depends on the scaling properties of representations of designs. A nongenerative representation (one in which each element of the encoded design is used at most once in translating to the design) scales linearly with the number of elements. Search algorithms that use nongenerative representations quickly become intractable (search times vary approximately exponentially with the number of design elements), and thus are not amenable to scaling to complex designs. Generative representations are compact representations and were devised as a means to circumvent the above-mentioned fundamental restriction on scalability. In the present method, a robot is defined by a compact programmatic form (its generative representation) and the evolutionary variation takes place on this form. The evolutionary process is an iterative one, wherein each cycle consists of the following steps: 1. Generative representations are generated in an evolutionary subprocess. 2. Each generative representation is a program that, when compiled, produces an assembly procedure. 3. In a computational simulation, a constructor executes an assembly procedure to generate a robot. 4. A physical-simulation program tests the performance of a simulated constructed robot, evaluating the performance according to a fitness criterion to yield a figure of merit that is fed back into the evolutionary subprocess of the next iteration. In comparison with prior approaches to automated evolutionary design of robots, the use of generative representations offers two advantages: First, a generative representation enables the reuse of components in regular and hierarchical ways and thereby serves as a systematic means of creating more complex modules out of simpler ones. Second, the evolved generative representation may capture intrinsic properties of the design problem, so that variations in the representations move through the design space more effectively than do equivalent variations in a nongenerative representation. This method has been demonstrated by using it to design some robots that move, variously, by walking, rolling, or sliding. Some of the robots were built (see figure). Although these robots are very simple in comparison with robots designed by humans, their structures are more regular, modular, hierarchical, and complex than are those of evolved designs of comparable functionality synthesized by use of nongenerative representations.

  16. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-14

    ... Program (NCAP) Test Concerning Automated Commercial Environment (ACE) Simplified Entry: Modification of... Automated Commercial Environment (ACE). The test's participant selection criteria are modified to reflect... (NCAP) test concerning Automated Commercial Environment (ACE) Simplified Entry functionality (Simplified...

  17. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, and 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in test standards. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  18. Automated Software Development Workstation (ASDW)

    NASA Technical Reports Server (NTRS)

    Fridge, Ernie

    1990-01-01

    Software development is a serious bottleneck in the construction of complex automated systems. Increased reuse of software designs and components has been viewed as a way to relieve this bottleneck. One approach to achieving software reusability is through the development and use of software parts composition systems. A software parts composition system is a software development environment comprised of a parts description language for modeling parts and their interfaces, a catalog of existing parts, a composition editor that aids a user in the specification of a new application from existing parts, and a code generator that takes a specification and generates an implementation of the new application in a target language. The Automated Software Development Workstation (ASDW) is an expert system shell that provides the capabilities required to develop and manipulate these software parts composition systems. The ASDW is now in Beta testing at the Johnson Space Center. Future work centers on responding to user feedback for capability and usability enhancement, expanding the scope of the software lifecycle that is covered, and providing solutions for handling very large libraries of reusable components.

  19. The Successful Development of an Automated Rendezvous and Capture (AR&C) System for the National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    Roe, Fred D.; Howard, Richard T.

    2003-01-01

    During the 1990's, the Marshall Space Flight Center (MSFC) conducted pioneering research in the development of an automated rendezvous and capture/docking (AR&C) system for U.S. space vehicles. Development and demonstration of a rendezvous sensor was identified early in the AR&C Program as the critical enabling technology that allows automated proximity operations and docking. A first-generation rendezvous sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on STS-87 and STS-95, proving the concept of a video-based sensor. A ground demonstration of the entire system and software was successfully tested. Advances in both video and signal processing technologies and the lessons learned from the two successful flight experiments provided a baseline for the development, by the MSFC, of a new generation of video-based rendezvous sensor. The Advanced Video Guidance Sensor (AVGS) has greatly increased performance and additional capability for longer-range operation, with a new target designed as a direct replacement for existing ISS hemispherical reflectors.

  20. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  1. DG-AMMOS: a new tool to generate 3d conformation of small molecules using distance geometry and automated molecular mechanics optimization for in silico screening.

    PubMed

    Lagorce, David; Pencheva, Tania; Villoutreix, Bruno O; Miteva, Maria A

    2009-11-13

    Discovery of new bioactive molecules that could enter drug discovery programs or that could serve as chemical probes is a very complex and costly endeavor. Structure-based and ligand-based in silico screening approaches are nowadays extensively used to complement experimental screening, both to increase the effectiveness of the process and to facilitate the screening of thousands or millions of small molecules against a biomolecular target. Both in silico screening methods require as input a suitable chemical compound collection, and most often the 3D structures of the small molecules have to be generated, since compounds are usually delivered in 1D SMILES, CANSMILES or in 2D SDF formats. Here, we describe the new open source program DG-AMMOS, which allows the generation of the 3D conformation of small molecules using Distance Geometry and their energy minimization via Automated Molecular Mechanics Optimization. The program is validated on the Astex dataset, the ChemBridge Diversity database and on a number of small molecules with known crystal structures extracted from the Cambridge Structural Database. A comparison with the free program Balloon and the well-known commercial program Omega, which also generate 3D structures of small molecules, is carried out. The results show that the new free program DG-AMMOS is a very efficient 3D structure generation engine. DG-AMMOS provides fast, automated and reliable access to the generation of 3D conformations of small molecules and facilitates the preparation of a compound collection prior to high-throughput virtual screening computations. The validation of DG-AMMOS on several different datasets proves that the generated structures are generally of equal quality or sometimes better than structures obtained by other tested methods.
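
    The same distance-geometry-then-minimize workflow can be sketched with RDKit, an open-source cheminformatics toolkit unrelated to DG-AMMOS; this is an illustration of the general approach, not DG-AMMOS itself.

        # 3D conformer generation by distance geometry, then force-field
        # minimization (RDKit used here as an analogous open-source tool).
        from rdkit import Chem
        from rdkit.Chem import AllChem

        mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as an example
        mol = Chem.AddHs(mol)
        AllChem.EmbedMolecule(mol, randomSeed=42)   # distance-geometry embedding
        AllChem.MMFFOptimizeMolecule(mol)           # molecular-mechanics minimization
        print(Chem.MolToMolBlock(mol))              # 3D coordinates, SDF-style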

  2. Production Maintenance Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason Gabler, David Skinner

    2005-11-01

    PMI is an XML framework for formulating tests of software and software environments that operate in a relatively push-button manner, i.e., can be automated, and that provide results that are readily consumable/publishable via RSS. Insofar as possible, the tests are carried out in a manner congruent with real usage. PMI drives shell scripts via a perl program which is in charge of timing, validating each test, and controlling the flow through sets of tests. Testing in PMI is built up hierarchically. A suite of tests may start by testing basic functionalities (file system is writable, compiler is found and functions, shell environment behaves as expected, etc.) and work up to larger, more complicated activities (execution of parallel code, file transfers, etc.). At each step in this hierarchy, a failure leads to generation of a text message or RSS item that can be tagged as to who should be notified of the failure. PMI has been directed at two functionalities: 1) regular and automated testing of multi-user environments, and 2) version-wise testing of new software releases prior to their deployment in a production mode.
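
    A hedged sketch of the hierarchical, fail-fast flow described above (PMI itself is an XML/perl framework; this Python stand-in and its probe commands are hypothetical):

        # Basic checks run first; a failure short-circuits the more expensive
        # tests that depend on them and would trigger a tagged notification.
        import subprocess

        SUITE = [
            ("filesystem writable", "touch /tmp/pmi_probe && rm /tmp/pmi_probe"),
            ("compiler found",      "which cc"),
            ("parallel job runs",   "echo mpirun -n 2 ./hello  # placeholder"),
        ]

        def run_suite(suite):
            for name, cmd in suite:
                result = subprocess.run(cmd, shell=True, capture_output=True)
                if result.returncode != 0:
                    # In PMI this becomes a text/RSS message tagged with an owner.
                    print(f"FAIL: {name} -- notify owner, skip dependent tests")
                    return False
                print(f"ok: {name}")
            return True

        run_suite(SUITE)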

  3. Validating EHR documents: automatic schematron generation using archetypes.

    PubMed

    Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph

    2014-01-01

    The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations.
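
    As a rough illustration of the generation step, constraints drawn from an archetype-derived object model can be mapped to Schematron assertions; the constraint records and paths below are hypothetical stand-ins, not the study's generator.

        # Emit a Schematron rule per range constraint (hypothetical model).
        constraints = [
            {"path": "cda:observation/cda:value/@value", "min": 0, "max": 300},
        ]

        TEMPLATE = (
            '<sch:rule context="{ctx}">\n'
            '  <sch:assert test="{test}">value must be in [{lo}, {hi}]</sch:assert>\n'
            '</sch:rule>'
        )

        for c in constraints:
            ctx, leaf = c["path"].rsplit("/", 1)
            test = f"{leaf} &gt;= {c['min']} and {leaf} &lt;= {c['max']}"
            print(TEMPLATE.format(ctx=ctx, test=test, lo=c["min"], hi=c["max"]))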

  4. Harnessing Scientific Literature Reports for Pharmacovigilance

    PubMed Central

    Ripple, Anna; Tonning, Joseph; Munoz, Monica; Hasan, Rashedul; Ly, Thomas; Francis, Henry; Bodenreider, Olivier

    2017-01-01

    Objectives: We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers' capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. Methods: A prototype, open source, web-based, software analytical tool generated statistical disproportionality data mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. Results: All usability test participants cited the tool's ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool's automated literature search relative to a manual 'all fields' PubMed search, missing drugs and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Conclusions: Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end user satisfaction. PMID:28326432

  5. A Computational Framework for Automation of Point Defect Calculations

    NASA Astrophysics Data System (ADS)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanovic, Vladan

    A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology. We believe that a robust automated tool like this will enable the materials-by-design community to assess the impact of point defects on materials performance.
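
    For reference, the three corrections enter the standard supercell expression for the formation energy of a defect X in charge state q (the usual formalism, stated here for context; not code from the package):

        E_f[X^q] = E_\mathrm{tot}[X^q] - E_\mathrm{tot}[\mathrm{bulk}] - \sum_i n_i \mu_i
                   + q\,(E_F + E_\mathrm{VBM} + \Delta V) + E_\mathrm{corr}

    Here n_i atoms with chemical potential \mu_i are added (n_i > 0) or removed (n_i < 0), \Delta V is the potential-alignment term, E_F is the Fermi level referenced to the valence-band maximum E_\mathrm{VBM}, and E_\mathrm{corr} collects the image-charge and band-filling corrections.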

  6. Digital test signal generation: An accurate SNR calibration approach for the DSN

    NASA Technical Reports Server (NTRS)

    Gutierrez-Luaces, Benito O.

    1993-01-01

    In support of the ongoing automation of the Deep Space Network (DSN), a new method of generating analog test signals with an accurate signal-to-noise ratio (SNR) is described. High accuracy is obtained by simultaneous generation of digital noise and signal spectra at the desired bandwidth (baseband or bandpass). The digital synthesis provides a test signal embedded in noise with the statistical properties of a stationary random process. Accuracy is dependent on test integration time and limited only by the system quantization noise (0.02 dB). The monitor and control as well as signal-processing programs reside in a personal computer (PC). Commands are transmitted to properly configure the specially designed high-speed digital hardware. The prototype can generate either two data channels, modulated or not onto a subcarrier, or one QPSK channel, or a residual carrier with one biphase data channel. The analog spectrum generated is in the DC to 10 MHz frequency range. These spectra may be up-converted to any desired frequency without loss of the SNR characteristics provided. Test results are presented.
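
    The core calibration idea, scaling digitally generated noise so the signal-to-noise ratio hits a target value, can be sketched in a few lines (a simplified illustration, not the DSN implementation):

        # Generate a tone plus noise scaled for a target SNR, then verify it.
        import numpy as np

        rng = np.random.default_rng(0)
        fs, f0, n = 1.0e6, 1.0e4, 2**16             # sample rate, tone, length
        t = np.arange(n) / fs
        signal = np.sin(2 * np.pi * f0 * t)

        snr_db = 10.0                               # desired SNR
        p_signal = np.mean(signal**2)
        p_noise = p_signal / 10**(snr_db / 10)      # noise power for target SNR
        noise = rng.normal(0.0, np.sqrt(p_noise), n)

        realized = 10 * np.log10(p_signal / np.mean(noise**2))
        print(f"target {snr_db:.2f} dB, realized {realized:.2f} dB")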

  7. AUTOMATED LITERATURE PROCESSING HANDLING AND ANALYSIS SYSTEM--FIRST GENERATION.

    ERIC Educational Resources Information Center

    Redstone Scientific Information Center, Redstone Arsenal, AL.

    THE REPORT PRESENTS A SUMMARY OF THE DEVELOPMENT AND THE CHARACTERISTICS OF THE FIRST GENERATION OF THE AUTOMATED LITERATURE PROCESSING, HANDLING AND ANALYSIS (ALPHA-1) SYSTEM. DESCRIPTIONS OF THE COMPUTER TECHNOLOGY OF ALPHA-1 AND THE USE OF THIS AUTOMATED LIBRARY TECHNIQUE ARE PRESENTED. EACH OF THE SUBSYSTEMS AND MODULES NOW IN OPERATION ARE…

  8. A computational framework for automation of point defect calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.

  9. A computational framework for automation of point defect calculations

    DOE PAGES

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei; ...

    2017-01-13

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.

  10. The 3D Euler solutions using automated Cartesian grid generation

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.

    1993-01-01

    Viewgraphs on 3-dimensional Euler solutions using automated Cartesian grid generation are presented. Topics covered include: computational fluid dynamics (CFD) and the design cycle; Cartesian grid strategy; structured body fit; grid generation; prolate spheroid; and ONERA M6 wing.

  11. Practical interpretation of CYP2D6 haplotypes: Comparison and integration of automated and expert calling.

    PubMed

    Ruaño, Gualberto; Kocherla, Mohan; Graydon, James S; Holford, Theodore R; Makowski, Gregory S; Goethe, John W

    2016-05-01

    We describe a population genetic approach to compare samples interpreted with expert calling (EC) versus automated calling (AC) for CYP2D6 haplotyping. The analysis represents 4812 haplotype calls, based on signal data generated by Luminex xMap analyzers, from 2406 patients referred to a high-complexity molecular diagnostics laboratory for CYP450 testing. DNA was extracted from buccal swabs. We compared the results of expert calls (EC) and automated calls (AC) with regard to haplotype number and frequency. The ratio of EC to AC was 1:3. Haplotype frequencies from EC and AC samples were convergent across haplotypes, and their distribution was not statistically different between the groups. Most duplications required EC, as only expansions with homozygous or hemizygous haplotypes could be called automatically. High-complexity laboratories can offer interpretation equivalent to automated calling for non-expanded CYP2D6 loci, and superior interpretation for duplications. We have validated expert calling, specified by scoring rules, as a standard operating procedure integrated with an automated calling algorithm. The integration of EC with AC is a practical strategy for CYP2D6 clinical haplotyping. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Automated classification of articular cartilage surfaces based on surface texture.

    PubMed

    Stachowiak, G P; Stachowiak, G W; Podsiadlo, P

    2006-11-01

    In this study, the automated classification system previously developed by the authors was used to classify articular cartilage surfaces with different degrees of wear. This automated system classifies surfaces based on their texture. Plug samples of sheep cartilage (pins) were run on stainless steel discs under various conditions using a pin-on-disc tribometer. Testing conditions were specifically designed to produce different severities of cartilage damage due to wear. Environmental scanning electron microscope (ESEM) images of cartilage surfaces, which formed a database for pattern recognition analysis, were acquired. The ESEM images of cartilage were divided into five groups (classes), each class representing different wear conditions or wear severity. Each class was first examined and assessed visually. Next, the automated classification system (pattern recognition) was applied to all classes. The results of the automated surface texture classification were compared to those based on visual assessment of surface morphology. It was shown that the texture-based automated classification system was an efficient and accurate method of distinguishing between various cartilage surfaces generated under different wear conditions. It appears that the texture-based classification method has the potential to become a useful tool in medical diagnostics.

  13. Generators and automated generator systems for production and on-line injections of pet radiopharmaceuticals

    NASA Astrophysics Data System (ADS)

    Shimchuk, G.; Shimchuk, Gr; Pakhomov, G.; Avalishvili, G.; Zavrazhnov, G.; Polonsky-Byslaev, I.; Fedotov, A.; Polozov, P.

    2017-01-01

    One of the promising directions of PET development is the use of generator-produced positron-emitting nuclides [1,2]. Introduction of this technology is financially attractive, since it does not require an expensive dedicated accelerator and radiochemical laboratory in the medical institution, which considerably reduces the cost of PET diagnostics and makes it available to more patients. POZITOM-PRO RPC LLC has developed and produced an 82Sr-82Rb generator; an automated injection system designed for automatic, fully controlled injections of the 82RbCl produced by this generator; and automated radiopharmaceutical synthesis units, based on 68Ga from a domestically manufactured 68Ge-68Ga generator, for preparing two pharmaceuticals: Ga-68-DOTA-TATE and Vascular Ga-68.

  14. Designing Anticancer Peptides by Constructive Machine Learning.

    PubMed

    Grisoni, Francesca; Neuhaus, Claudia S; Gabernet, Gisela; Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-04-21

    Constructive (generative) machine learning enables the automated generation of novel chemical structures without the need for explicit molecular design rules. This study presents the experimental application of such a deep machine learning model to design membranolytic anticancer peptides (ACPs) de novo. A recurrent neural network with long short-term memory cells was trained on α-helical cationic amphipathic peptide sequences and then fine-tuned with 26 known ACPs by transfer learning. This optimized model was used to generate unique and novel amino acid sequences. Twelve of the peptides were synthesized and tested for their activity on MCF7 human breast adenocarcinoma cells and selectivity against human erythrocytes. Ten of these peptides were active against cancer cells. Six of the active peptides killed MCF7 cancer cells without affecting human erythrocytes with at least threefold selectivity. These results advocate constructive machine learning for the automated design of peptides with desired biological activities. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
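
    A minimal sketch of the kind of sequence model described, a character-level LSTM over amino-acid tokens; the architecture, sizes, and sampling scheme here are illustrative assumptions, not the authors' exact configuration. Fine-tuning by transfer learning would simply continue training on the small ACP set, typically at a reduced learning rate.

        # Character-level LSTM language model for peptide sequences (sketch).
        import torch
        import torch.nn as nn

        AA = "ACDEFGHIKLMNPQRSTVWY$"          # 20 amino acids + end token
        IDX = {a: i for i, a in enumerate(AA)}

        class PeptideLM(nn.Module):
            def __init__(self, vocab=len(AA), emb=32, hidden=128):
                super().__init__()
                self.embed = nn.Embedding(vocab, emb)
                self.lstm = nn.LSTM(emb, hidden, batch_first=True)
                self.head = nn.Linear(hidden, vocab)

            def forward(self, x, state=None):
                h, state = self.lstm(self.embed(x), state)
                return self.head(h), state

        def sample(model, start="G", max_len=30):
            model.eval()
            seq, state = [IDX[start]], None
            with torch.no_grad():
                for _ in range(max_len):
                    logits, state = model(torch.tensor([[seq[-1]]]), state)
                    nxt = torch.multinomial(logits[0, -1].softmax(-1), 1).item()
                    if AA[nxt] == "$":        # end-of-sequence token
                        break
                    seq.append(nxt)
            return "".join(AA[i] for i in seq)

        print(sample(PeptideLM()))            # untrained model: random output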

  15. Using Apex To Construct CPM-GOMS Models

    NASA Technical Reports Server (NTRS)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2006-01-01

    A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to be used to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well (see figure). While these results are promising, there is a need for further development of the process. Moreover, it will also be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.

  16. Automated procedures for sizing aerospace vehicle structures /SAVES/

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Blackburn, C. L.; Dixon, S. C.

    1972-01-01

    Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development, called SAVES, is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on the use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.

  17. Cerebellum engages in automation of verb-generation skill.

    PubMed

    Yang, Zhi; Wu, Paula; Weng, Xuchu; Bandettini, Peter A

    2014-03-01

    Numerous studies have shown cerebellar involvement in item-specific association, a form of explicit learning. However, very few have demonstrated cerebellar participation in automation of non-motor cognitive tasks. Applying fMRI to a repeated verb-generation task, we sought to distinguish cerebellar involvement in learning of item-specific noun-verb association and automation of verb generation skill. The same set of nouns was repeated in six verb-generation blocks so that subjects practiced generating verbs for the nouns. The practice was followed by a novel block with a different set of nouns. The cerebellar vermis (IV/V) and the right cerebellar lobule VI showed decreased activation following practice; activation in the right cerebellar Crus I was significantly lower in the novel challenge than in the initial verb-generation task. Furthermore, activation in this region during well-practiced blocks strongly correlated with improvement of behavioral performance in both the well-practiced and the novel blocks, suggesting its role in the learning of general mental skills not specific to the practiced noun-verb pairs. Therefore, the cerebellum processes both explicit verbal associative learning and automation of cognitive tasks. Different cerebellar regions predominate in this processing: lobule VI during the acquisition of item-specific association, and Crus I during automation of verb-generation skills through practice.

  18. "First generation" automated DNA sequencing technology.

    PubMed

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.

  19. Life Testing and Diagnostics of a Planar Out-of-Core Thermionic Converter

    NASA Astrophysics Data System (ADS)

    Thayer, Kevin L.; Ramalingam, Mysore L.; Young, Timothy J.; Lamp, Thomas R.

    1994-07-01

    This paper details the design and performance of an automated computer data acquisition system for a planar, out-of-core thermionic converter with CVD rhenium electrodes. The output characteristics of this converter have been mapped for emitter temperatures ranging from approximately 1700 K to 2000 K, and life testing of the converter is presently being performed at the design point of operation. An automated data acquisition system has been constructed to facilitate the collection of current density versus output voltage (J-V) and temperature data from the converter throughout the life test. This system minimizes the amount of human interaction necessary during the life test to measure and archive the data and present it in a usable form. The task was accomplished using a Macintosh IIcx computer, two multiple-purpose interface boards, a digital oscilloscope, a sweep generator, and National Instruments' LabVIEW application software package.

  20. Extraction and Analysis of Display Data

    NASA Technical Reports Server (NTRS)

    Land, Chris; Moye, Kathryn

    2008-01-01

    The Display Audit Suite is an integrated package of software tools that partly automates the detection of Portable Computer System (PCS) display errors. [PCS is a laptop computer used onboard the International Space Station (ISS).] The need for automation stems from the large quantity of PCS displays (6,000+, with 1,000,000+ lines of command and telemetry data). The Display Audit Suite includes data-extraction tools, automatic error detection tools, and database tools for generating analysis spreadsheets. These spreadsheets allow engineers to more easily identify many different kinds of possible errors. The Suite supports over 40 independent analyses and complements formal testing by being comprehensive (all displays can be checked) and by revealing errors that are difficult to detect via test. In addition, the Suite can be run early in the development cycle to find and correct errors in advance of testing.

  1. Space plasma research

    NASA Technical Reports Server (NTRS)

    Comfort, R. H.; Horwitz, J. L.

    1986-01-01

    Temperature and density analyses in the Automated Analysis Program (for the global empirical model) were modified to use flow velocities produced by the flow velocity analysis. Revisions were started to construct an interactive version of the technique for temperature and density analysis used in the Automated Analysis Program. A study of ion and electron heating at high altitudes in the outer plasmasphere was initiated. Also, the analysis of the electron gun experiments on SCATHA was extended to include eclipse operations in order to test a hypothesis that there are interactions between the 50 to 100 eV beam and spacecraft-generated photoelectrons. The MASSCOMP software to be used in taking and displaying data in the two-ion plasma experiment was tested and is now working satisfactorily. Papers published during the report period are listed.

  2. Oxidation State Specific Generation of Arsines from Methylated Arsenicals Based on L- Cysteine Treatment in Buffered Media for Speciation Analysis by Hydride Generation - Automated Cryotrapping - Gas Chromatography-Atomic Absorption Spectrometry with the Multiatomizer

    PubMed Central

    Matoušek, Tomáš; Hernández-Zavala, Araceli; Svoboda, Milan; Langrová, Lenka; Adair, Blakely M.; Drobná, Zuzana; Thomas, David J.; Stýblo, Miroslav; Dědina, Jiří

    2008-01-01

    An automated system for hydride generation - cryotrapping - gas chromatography - atomic absorption spectrometry with the multiatomizer is described. Arsines are preconcentrated and separated in a Chromosorb-filled U-tube. An automated cryotrapping unit, employing nitrogen gas formed upon heating in the detection phase for the displacement of the cooling liquid nitrogen, has been developed. The conditions for separation of arsines in a Chromosorb-filled U-tube have been optimized. A complete separation of signals from arsine, methylarsine, dimethylarsine, and trimethylarsine has been achieved within a 60 s reading window. The limits of detection for the methylated arsenicals tested were 4 ng l−1. Selective hydride generation is applied for the oxidation-state-specific speciation analysis of inorganic and methylated arsenicals. The arsines are generated either exclusively from trivalent or from both tri- and pentavalent inorganic and methylated arsenicals, depending on the presence of L-cysteine as a prereductant and/or reaction modifier. A TRIS buffer reaction medium is proposed to overcome the narrow optimum concentration range observed for the L-cysteine-modified reaction in HCl medium. The system provides uniform peak area sensitivity for all As species. Consequently, calibration with a single form of As is possible. This method permits high-throughput speciation analysis of metabolites of inorganic arsenic in relatively complex biological matrices such as cell culture systems without sample pretreatment, thus preserving the distribution of tri- and pentavalent species. PMID:18521190

  3. CMOS array design automation techniques. [metal oxide semiconductors

    NASA Technical Reports Server (NTRS)

    Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.

    1975-01-01

    A low-cost, quick-turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology, and required performance considerations, particularly high circuit speed.

  4. Operational Test and Evaluation Handbook for Aircrew Training Devices. Volume I. Planning and Management.

    DTIC Science & Technology

    1982-02-01

    ... and to develop an awareness of the T&E roles and responsibilities of the various Air Force organizations involved in the T&E process... mathematical models to determine controller messages and issue controller messages using computer-generated speech. AUTOMATED PERFORMANCE ALERTS: Signals

  5. "Clustering" Documents Automatically to Support Scoping Reviews of Research: A Case Study

    ERIC Educational Resources Information Center

    Stansfield, Claire; Thomas, James; Kavanagh, Josephine

    2013-01-01

    Background: Scoping reviews of research help determine the feasibility and the resource requirements of conducting a systematic review, and the potential to generate a description of the literature quickly is attractive. Aims: To test the utility and applicability of an automated clustering tool to describe and group research studies to improve…

  6. Background to the development process, Automated Residential Energy Standard (ARES) in support of proposed interim energy conservation voluntary performance standards for new non-federal residential buildings: Volume 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the development and testing of a set of recommendations generated to serve as a primary basis for the Congressionally-mandated residential standard. This report treats only the residential building recommendations.

  7. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize cost. Automating this process is a profitable goal, and neural networks should provide an excellent means of doing so. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially useful inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index, and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index, and forecast wind velocity.
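
    The parametric-study design, training the same network on successively richer feature sets and comparing held-out error, can be sketched as follows; the data are synthetic placeholders and the model configuration is an assumption, not the study's network.

        # Compare candidate feature sets for load forecasting with a small MLP.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_percentage_error

        rng = np.random.default_rng(1)
        n = 2000
        temp = rng.uniform(-5, 35, n)                 # ambient temperature
        day = rng.integers(0, 7, n)                   # day of week
        dew = temp - rng.uniform(0, 10, n)            # dew point temperature
        load = 900 + 12 * np.abs(temp - 18) - 40 * (day >= 5) + rng.normal(0, 25, n)

        for name, X in {
            "temperature only": np.c_[temp],
            "+ day of week":    np.c_[temp, day],
            "+ dew point":      np.c_[temp, day, dew],
        }.items():
            Xtr, Xte, ytr, yte = train_test_split(X, load, random_state=0)
            mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                               random_state=0).fit(Xtr, ytr)
            err = mean_absolute_percentage_error(yte, mlp.predict(Xte))
            print(f"{name}: MAPE = {err:.3f}")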

  8. Automation of Cassini Support Imaging Uplink Command Development

    NASA Technical Reports Server (NTRS)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  9. Requirements-Driven Log Analysis (Extended Abstract)

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2012-01-01

    Imagine that you are tasked to help a project improve their testing effort. In a realistic scenario it will quickly become clear that having an impact is difficult. First of all, it will likely be a challenge to suggest an alternative approach which is significantly more automated and/or more effective than current practice. The reality is that an average software system has a complex input/output behavior. An automated testing approach will have to auto-generate test cases, each being a pair (i, o) consisting of a test input i and an oracle o. The test input i has to be somewhat meaningful, and the oracle o can be very complicated to compute. Second, even in cases where some testing technology has been developed that might improve current practice, it is likely difficult to completely change the current behavior of the testing team unless the technique is obviously superior and does everything already done by existing technology. So is there an easier way to incorporate formal methods-based approaches than the full-fledged test revolution? Fortunately, the answer is affirmative. A relatively simple approach is to benefit from possibly already existing logging infrastructure, which after all is part of most systems put in production. A log is a sequence of events, generated by special log recording statements, most often manually inserted in the code by the programmers. An event can be considered as a data record: a mapping from field names to values. We can analyze such a log using formal methods, for example checking it against a formal specification. This separates running the system from analyzing its behavior. It is not meant as an alternative to testing, since it does not address the important input generation problem. However, it offers a solution which testing teams might accept, since it has low impact on the existing process. A single person might be assigned to perform such log analysis, compared to the entire testing team changing behavior.
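
    A small illustration of checking a recorded log against a requirement, here "every file that is opened is eventually closed"; the event records and the property are invented for the example.

        # Check an event log (list of field-name -> value records) against a
        # simple temporal requirement.
        log = [
            {"event": "open",  "file": "a.dat"},
            {"event": "write", "file": "a.dat"},
            {"event": "open",  "file": "b.dat"},
            {"event": "close", "file": "a.dat"},
        ]

        def check_open_close(log):
            open_files = set()
            for i, e in enumerate(log):
                if e["event"] == "open":
                    open_files.add(e["file"])
                elif e["event"] == "close":
                    if e["file"] not in open_files:
                        print(f"violation at event {i}: close without open: {e}")
                    open_files.discard(e["file"])
            for f in sorted(open_files):
                print(f"violation: '{f}' opened but never closed")

        check_open_close(log)   # reports that b.dat is never closed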

  10. Autogen Version 2.0

    NASA Technical Reports Server (NTRS)

    Gladden, Roy

    2007-01-01

    Version 2.0 of the autogen software has been released. "Autogen" (automated sequence generation) signifies both a process and software used to implement the process of automated generation of sequences of commands in a standard format for uplink to spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes.

  11. 76 FR 34246 - Automated Commercial Environment (ACE); Announcement of National Customs Automation Program Test...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-13

    ... CBP with authority to conduct limited test programs or procedures designed to evaluate planned... aspects of this test, including the design, conduct and implementation of the test, in order to determine... Environment (ACE); Announcement of National Customs Automation Program Test of Automated Procedures for In...

  12. An automated methodology development. [software design for combat simulation

    NASA Technical Reports Server (NTRS)

    Hawley, L. R.

    1985-01-01

    The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life cycle costs and ease the program development effort. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real time, and one-to-one correlation between the object states and real-world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem database directly to enhance code efficiency by, e.g., eliminating unused subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness and efficiency goals are discussed.

  13. Natural Language Interface for Safety Certification of Safety-Critical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2011-01-01

    Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.

  14. How does a collision warning system shape driver's brake response time? The influence of expectancy and automation complacency on real-life emergency braking.

    PubMed

    Ruscio, Daniele; Ciceri, Maria Rita; Biassoni, Federica

    2015-04-01

    Brake Reaction Time (BRT) is an important parameter for road safety. Previous research has shown that drivers' expectations can impact reaction time when facing hazardous situations, but advanced driver assistance systems can change the way BRTs are considered. Interaction with a collision warning system can support faster, more efficient responses, but it also imposes a monitoring and evaluation task that may lead to automation complacency. The aims of the present study are to test in a real-life setting whether automation complacency can be generated by a collision warning system, and which components of expectancy impact the different tasks involved in an assisted BRT process. More specifically, four components of expectancy were investigated: presence/absence of anticipatory information, previous direct experience, reliability of the device, and predictability of the hazard determined by repeated use of the warning system. The results provide information on perception time and mental processing of the collision warning alerts. In particular, reliable warnings quickened the decision-making process; misleading warnings generated automation complacency, slowing visual search for hazard detection; lack of direct experience slowed the overall response; and unexpected failure of the device led to inattentional blindness and potential pseudo-accidents with surprise obstacle intrusion. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. A taxonomy and discussion of software attack technologies

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    2005-03-01

    Software is a complex thing. It is not an engineering artifact that springs forth from a design by simply following software coding rules; creativity and the human element are at the heart of the process. Software development is part science, part art, and part craft. Design, architecture, and coding are equally important activities, and in each of these activities, errors may be introduced that lead to security vulnerabilities. Therefore, inevitably, errors enter into the code. Some of these errors are discovered during testing; however, some are not. The best way to find security errors, whether they are introduced as part of the architecture development effort or the coding effort, is to automate the security testing process to the maximum extent possible and add this class of tools to the available tools that aid in the compilation process, testing, test analysis, and software distribution. Recent technological advances, improvements in computer-generated forces (CGFs), and results in research in information assurance and software protection indicate that we can build a semi-intelligent software security testing tool. However, before we can undertake the security testing automation effort, we must understand the scope of the required testing, the security failures that need to be uncovered during testing, and the characteristics of the failures. Therefore, we undertook the research reported in the paper, which is the development of a taxonomy and a discussion of software attacks generated from the point of view of the security tester, with the goal of using the taxonomy to guide the development of the knowledge base for the automated security testing tool. The representation for attacks and threat cases yielded by this research captures the strategies, tactics, and other considerations that come into play during the planning and execution of attacks upon application software. The paper is organized as follows. Section one contains an introduction to our research and a discussion of the motivation for our work. Section two presents our taxonomy of software attacks and a discussion of the strategies employed and general weaknesses exploited for each attack. Section three contains a summary and suggestions for further research.

  16. Automated ammunition logistics for the Crusader program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Speaks, D.M.; Kring, C.T.; Lloyd, P.D.

    1997-03-01

    The US Army's next generation artillery system is called the Crusader. A self-propelled howitzer and a resupply vehicle constitute the Crusader system, which will be designed for improved mobility, increased firepower, and greater survivability than current generation vehicles. The Army's Project Manager, Crusader, gave Oak Ridge National Laboratory (ORNL) the task of developing and demonstrating a concept for the resupply vehicle. The resupply vehicle is intended to sustain the howitzer with ammunition and fuel and will significantly increase capabilities over those of current resupply vehicles. Ammunition is currently processed and transferred almost entirely by hand. ORNL identified and evaluated various concepts for automated upload, processing, storage, docking and delivery. Each of the critical technologies was then developed separately and demonstrated on discrete test platforms. An integrated technology demonstrator, incorporating each of the individual technology components to realistically simulate performance of the selected vehicle concept, was developed and successfully demonstrated for the Army.

  17. A machine learning approach for automated wide-range frequency tagging analysis in embedded neuromonitoring systems.

    PubMed

    Montagna, Fabio; Buiatti, Marco; Benatti, Simone; Rossi, Davide; Farella, Elisabetta; Benini, Luca

    2017-10-01

    EEG is a standard non-invasive technique used in neural disease diagnostics and the neurosciences. Frequency tagging is an increasingly popular experimental paradigm that efficiently tests brain function by measuring EEG responses to periodic stimulation. Recently, frequency-tagging paradigms have proven successful with low stimulation frequencies (0.5-6 Hz), but the EEG signal is intrinsically noisy in this frequency range, requiring heavy signal processing and significant human intervention for response estimation. This limits the possibility of processing the EEG on resource-constrained systems and of designing smart EEG-based devices for automated diagnostics. We propose an algorithm for artifact removal and automated detection of frequency-tagging responses in a wide range of stimulation frequencies, which we test on a visual stimulation protocol. The algorithm is rooted in machine-learning-based pattern recognition techniques and is tailored for a new-generation parallel ultra-low-power processing platform (PULP), reaching more than 90% accuracy in frequency detection even for very low stimulation frequencies (<1 Hz) with a power budget of 56 mW. Copyright © 2017 Elsevier Inc. All rights reserved.
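
    The detection target can be pictured with a basic spectral check: locate power at the stimulation frequency and compare it with neighboring bins. This is a synthetic, simplified stand-in for the paper's machine-learning pipeline.

        # Detect a frequency tag as a spectral peak relative to its neighbors.
        import numpy as np

        fs, f_stim, dur = 250.0, 0.8, 200.0          # Hz, Hz, seconds
        t = np.arange(int(fs * dur)) / fs
        rng = np.random.default_rng(2)
        eeg = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 1.0, t.size)

        spec = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(t.size, d=1 / fs)

        k = int(np.argmin(np.abs(freqs - f_stim)))   # bin closest to the tag
        neighbors = np.r_[spec[k - 6:k - 1], spec[k + 2:k + 7]]
        snr = spec[k] / neighbors.mean()
        print(f"tag bin {freqs[k]:.3f} Hz, spectral SNR = {snr:.1f}")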

  18. Towards Automating Clinical Assessments: A Survey of the Timed Up and Go (TUG)

    PubMed Central

    Sprint, Gina; Cook, Diane; Weeks, Douglas

    2016-01-01

    Older adults often suffer from functional impairments that affect their ability to perform everyday tasks. To detect the onset and changes in abilities, healthcare professionals administer standardized assessments. Recently, technology has been utilized to complement these clinical assessments to gain a more objective and detailed view of functionality. In the clinic and at home, technology is able to provide more information about patient performance and reduce subjectivity in outcome measures. The timed up and go (TUG) test is one such assessment recently instrumented with technology in several studies, yielding promising results towards the future of automating clinical assessments. Potential benefits of technological TUG implementations include additional performance parameters, generated reports, and the ability to be self-administered in the home. In this paper, we provide an overview of the TUG test and technologies utilized for TUG instrumentation. We then critically review the technological advancements and follow up with an evaluation of the benefits and limitations of each approach. Finally, we analyze the gaps in the implementations and discuss challenges for future research towards automated, self-administered assessment in the home. PMID:25594979

  19. An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments

    NASA Technical Reports Server (NTRS)

    Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.

    2015-01-01

    The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and the USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure the debris fragments. By automating the entire process, the measurement results are made repeatable, and the human factor associated with calipers and 3D scanning is eliminated. Unlike calipers, the imaging system obtains non-contact measurements, avoiding damage to delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of a debris fragment: it can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D-scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured, and a custom-developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
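
    A sketch of the final extraction step: if characteristic length is taken as the average of a fragment's maximal extents along three orthogonal directions (a common convention in breakup modeling; the DebriSat pipeline's exact definition may differ), it can be estimated from a point cloud via its principal axes. The point cloud below is synthetic.

        # Characteristic length from a 3D point cloud (illustrative).
        import numpy as np

        rng = np.random.default_rng(3)
        points = rng.normal(size=(5000, 3)) * np.array([30.0, 10.0, 2.0])  # mm

        centered = points - points.mean(axis=0)
        _, _, axes = np.linalg.svd(centered, full_matrices=False)  # principal axes
        proj = centered @ axes.T                     # coordinates in that frame
        extents = proj.max(axis=0) - proj.min(axis=0)
        char_len = extents.mean()                    # average of the three spans
        print("extents (mm):", np.round(extents, 1), " Lc =", round(char_len, 1))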

  20. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Jendryscik, Radek

    2017-11-01

    Teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft computing. The key focus is often given to proper understanding of the principles of AI methods in two essential respects - why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in sensible time. We present an interesting problem, solved in non-educational research, concerning the automated generation of specific algebras in a huge search space. We emphasize the above-mentioned points through an educational case study of this problem in the automated generation of specific algebras.
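
    For classroom purposes, the skeleton of such an evolutionary search can be shown in a few lines; the toy fitness below (an all-ones bitstring) is a stand-in for "the candidate encodes an algebra with the desired properties".

        # Minimal genetic-algorithm loop over bitstring genomes (toy fitness).
        import random

        random.seed(4)
        LEN, POP, GENS = 40, 60, 80

        def fitness(bits):
            return sum(bits)                 # stand-in for a real evaluator

        def mutate(bits, rate=0.02):
            return [b ^ (random.random() < rate) for b in bits]

        def crossover(a, b):
            cut = random.randrange(1, LEN)
            return a[:cut] + b[cut:]

        pop = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: POP // 2]        # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(POP - len(parents))]
            pop = parents + children
        print("best fitness:", fitness(max(pop, key=fitness)), "of", LEN)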

  1. Comparison of Disk Diffusion, VITEK 2, and Broth Microdilution Antimicrobial Susceptibility Test Results for Unusual Species of Enterobacteriaceae

    PubMed Central

    Stone, Nimalie D.; O'Hara, Caroline M.; Williams, Portia P.; McGowan, John E.; Tenover, Fred C.

    2007-01-01

    We compared the antimicrobial susceptibility testing results generated by disk diffusion and the VITEK 2 automated system with the results of the Clinical and Laboratory Standards Institute (CLSI) broth microdilution (BMD) reference method for 61 isolates of unusual species of Enterobacteriaceae. The isolates represented 15 genera and 26 different species, including Buttiauxella, Cedecea, Kluyvera, Leminorella, and Yokenella. Antimicrobial agents included aminoglycosides, carbapenems, cephalosporins, fluoroquinolones, penicillins, and trimethoprim-sulfamethoxazole. CLSI interpretative criteria for Enterobacteriaceae were used. Of the 12 drugs tested by BMD and disk diffusion, 10 showed >95% categorical agreement (CA). CA was lower for ampicillin (80.3%) and cefazolin (77.0%). There were 3 very major errors (all with cefazolin), 1 major error (also with cefazolin), and 26 minor errors. Of the 40 isolates (representing 12 species) that could be identified with the VITEK 2 database, 36 were identified correctly to species level, 1 was identified to genus level only, and 3 were reported as unidentified. VITEK 2 generated MIC results for 42 (68.8%) of 61 isolates, but categorical interpretations (susceptible, intermediate, and resistant) were provided for only 22. For the 17 drugs tested by both BMD and VITEK 2, essential agreement ranged from 80.9 to 100% and CA ranged from 68.2% (ampicillin) to 100%; thirteen drugs exhibited 100% CA. In summary, disk diffusion provides a reliable alternative to BMD for testing of unusual Enterobacteriaceae, some of which cannot be tested, or produce incorrect results, by automated methods. PMID:17135429
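
    The agreement statistics used here can be computed directly from paired category calls; the pairs below are hypothetical, and the error definitions follow the usual convention (very major: reference resistant, test susceptible; major: reference susceptible, test resistant; minor: one method intermediate).

        # Categorical agreement and error types between reference and test.
        pairs = [("S", "S"), ("R", "S"), ("S", "R"), ("I", "S"), ("R", "R")]

        ca = sum(ref == tst for ref, tst in pairs) / len(pairs)
        very_major = sum(ref == "R" and tst == "S" for ref, tst in pairs)
        major = sum(ref == "S" and tst == "R" for ref, tst in pairs)
        minor = sum((ref == "I") != (tst == "I") for ref, tst in pairs)

        print(f"CA = {ca:.1%}, very major = {very_major}, "
              f"major = {major}, minor = {minor}")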

  2. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    Automated validation of flight-critical embedded systems is being done at the ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operational application of automated testing techniques has accentuated the flight control system design features that either help or hinder the use of these techniques; the paper discusses both kinds of features, including those that foster automated testing.

  3. Automated sequence analysis and editing software for HIV drug resistance testing.

    PubMed

    Struck, Daniel; Wallis, Carole L; Denisov, Gennady; Lambert, Christine; Servais, Jean-Yves; Viana, Raquel V; Letsoalo, Esrom; Bronze, Michelle; Aitken, Sue C; Schuurman, Rob; Stevens, Wendy; Schmit, Jean Claude; Rinke de Wit, Tobias; Perez Bercoff, Danielle

    2012-05-01

    Access to antiretroviral treatment in resource-limited settings is inevitably paralleled by the emergence of HIV drug resistance. Monitoring treatment efficacy and HIV drug resistance testing are therefore of increasing importance in resource-limited settings. Yet low-cost technologies and procedures suited to the particular context and constraints of such settings are still lacking. The ART-A (Affordable Resistance Testing for Africa) consortium brought together public and private partners to address this issue by developing automated sequence analysis and editing software to support high-throughput automated sequencing. The ART-A Software was designed to automatically process and edit ABI chromatograms or FASTA files from HIV-1 isolates. It performs the basecalling, assigns quality values, aligns query sequences against a set reference, infers a consensus sequence, identifies the HIV type and subtype, translates the nucleotide sequence to amino acids, and reports insertions/deletions, premature stop codons, ambiguities, and mixed calls. The results can be automatically exported to Excel to identify mutations. Automated analysis was compared to manual analysis using a panel of 1624 PR-RT sequences generated in 3 different laboratories. Discrepancies between manual and automated sequence analysis were 0.69% at the nucleotide level and 0.57% at the amino acid level (668,047 AA analyzed), and discordances at major resistance mutations were recorded in 62 cases (4.83% of differences, 0.04% of all AA) for PR and 171 cases (6.18% of differences, 0.03% of all AA) for RT. The ART-A Software is a time-sparing tool for pre-analyzing HIV and viral quasispecies sequences in high-throughput laboratories and highlighting positions requiring attention. Copyright © 2012 Elsevier B.V. All rights reserved.
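
    A hedged sketch of the kind of per-sequence post-processing the abstract describes (translation, premature-stop detection, mixed-call flagging) is shown below, using Biopython for the translation step; this is illustrative only and not the ART-A codebase.

    ```python
    from Bio.Seq import Seq

    IUPAC_MIXED = set("RYSWKMBDHVN")   # ambiguity codes indicating mixed base calls

    def report(nt_seq):
        """Translate a nucleotide query and flag mixed calls and premature stops."""
        mixed = [(i + 1, b) for i, b in enumerate(nt_seq) if b in IUPAC_MIXED]
        aa = str(Seq(nt_seq[: len(nt_seq) // 3 * 3]).translate())  # trim partial codon
        stops = [i + 1 for i, a in enumerate(aa[:-1]) if a == "*"]  # premature stops
        return {"protein": aa, "mixed_calls": mixed, "premature_stops": stops}

    # R = A/G mixed call at position 6; TGA = premature stop at codon 3.
    print(report("ATGAARTGATTT"))
    ```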

  4. Automatic structured grid generation using Gridgen (some restrictions apply)

    NASA Technical Reports Server (NTRS)

    Chawner, John R.; Steinbrenner, John P.

    1995-01-01

    The authors have noticed in the recent grid generation literature an emphasis on the automation of structured grid generation. The motivation behind such work is clear; grid generation is easily the most despised task in the grid-analyze-visualize triad of computational analysis (CA). However, because grid generation is closely coupled to both the design and analysis software and because quantitative measures of grid quality are lacking, 'push button' grid generation usually results in a compromise between speed, control, and quality. Overt emphasis on automation obscures the substantive issues of providing users with flexible tools for generating and modifying high quality grids in a design environment. In support of this paper's tongue-in-cheek title, many features of the Gridgen software are described. Gridgen is by no stretch of the imagination an automatic grid generator. Despite this fact, the code does utilize many automation techniques that permit interesting regenerative features.

  5. Software Quality Assurance and Verification for the MPACT Library Generation Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Williams, Mark L.; Wiarda, Dorothea

    This report fulfills the requirements for the Consortium for the Advanced Simulation of Light-Water Reactors (CASL) milestone L2:RTM.P14.02, “SQA and Verification for MPACT Library Generation,” by documenting the current status of the software quality, verification, and acceptance testing of nuclear data libraries for MPACT. It provides a brief overview of the library generation process, from general-purpose evaluated nuclear data files (ENDF/B) to a problem-dependent cross section library for modeling of light-water reactors (LWRs). The software quality assurance (SQA) programs associated with each of the software packages used to generate the nuclear data libraries are discussed; specific tests within the SCALE/AMPX and VERA/XSTools repositories are described. The methods and associated tests to verify the quality of the library during the generation process are described in detail. The library generation process has been automated to a degree that (1) ensures it can be run without user intervention and (2) ensures the library can be reproduced. Finally, the acceptance testing process that will be performed by representatives from the Radiation Transport Methods (RTM) Focus Area prior to the production library’s release is described in detail.

  6. Calibrating soil respiration measures with a dynamic flux apparatus using artificial soil media of varying porosity

    Treesearch

    John R. Butnor; Kurt H. Johnsen

    2004-01-01

    Measurement of soil respiration to quantify ecosystem carbon cycling requires absolute, not relative, estimates of soil CO2 efflux. We describe a novel, automated efflux apparatus that can be used to test the accuracy of chamber-based soil respiration measurements by generating known CO2 fluxes. Artificial soil is supported...

  7. Automation of checkout for the shuttle operations era

    NASA Technical Reports Server (NTRS)

    Anderson, J. A.; Hendrickson, K. O.

    1985-01-01

    Space Shuttle checkout differs from that of its Apollo predecessor. The complexity of the hardware, the shortened turnaround time, and the software that performs ground checkout are outlined. New techniques and standards for software development, along with the management structure to control them, have been implemented. The utilization of computer systems for vehicle testing is highlighted.

  8. Third generation design solar cell module LSA task 5, large scale production

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A total of twelve (12) preproduction modules were constructed, tested, and delivered. A new concept for the frame assembly was designed and proven to be quite reliable. This frame design, as well as the rest of the assembly, was designed with future high-volume production and the use of automated equipment in mind.

  9. AN ULTRAVIOLET-VISIBLE SPECTROPHOTOMETER AUTOMATION SYSTEM. PART III: PROGRAM DOCUMENTATION

    EPA Science Inventory

    The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes 'on-line' spectrophotometric quality assurance determinations, report generation, plot generation and data reduction for chlorophyll or color analysis. This system also has the capability to proces...

  10. 75 FR 29788 - Self-Regulatory Organizations; NASDAQ OMX PHLX, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-27

    ... generate and submit option quotations electronically through AUTOM in eligible options to which such SQT is... from the Exchange to generate and submit option quotations electronically through AUTOM in eligible...

  11. Imaging Flash Lidar for Safe Landing on Solar System Bodies and Spacecraft Rendezvous and Docking

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Roback, Vincent E.; Bulyshev, Alexander E.; Brewster, Paul F.; Carrion, William A.; Pierrottet, Diego F.; Hines, Glenn D.; Petway, Larry B.; Barnes, Bruce W.; Noe, Anna M.

    2015-01-01

    NASA has been pursuing flash lidar technology for autonomous, safe landing on solar system bodies and for automated rendezvous and docking. During the final stages of landing, from about 1 kilometer to 500 meters above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of the terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16,000-pixel range images with 7-centimeter precision at a 20-hertz frame rate, from a maximum slant range of 1800 meters from the target area. This paper describes the lidar instrument and presents the results of recent flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus) built by NASA Johnson Space Center. The flights were conducted at a simulated lunar terrain site, consisting of realistic hazard features and designated landing areas, built at NASA Kennedy Space Center specifically for this demonstration test. This paper also provides an overview of the plan for continued advancement of the flash lidar technology aimed at enhancing its performance to meet both landing and automated rendezvous and docking applications.

  12. Automated Mapping of Flood Events in the Mississippi River Basin Utilizing NASA Earth Observations

    NASA Technical Reports Server (NTRS)

    Bartkovich, Mercedes; Baldwin-Zook, Helen Blue; Cruz, Dashiell; McVey, Nicholas; Ploetz, Chris; Callaway, Olivia

    2017-01-01

    The Mississippi River Basin is the fourth largest drainage basin in the world and is susceptible to multi-level flood events caused by heavy precipitation, snow melt, and changes in water table levels. Conducting flood analysis during periods of disaster is a challenging endeavor for NASA's Short-term Prediction Research and Transition Center (SPoRT), the Federal Emergency Management Agency (FEMA), and the U.S. Geological Survey's Hazards Data Distribution Systems (USGS HDDS) due to the labor-intensive analysis involved and a lack of manpower. During this project, an automated script was generated that performs high-level flood analysis to relieve the workload for end users. The script incorporated Landsat 8 Operational Land Imager (OLI) tiles and utilized computer-learning techniques to generate accurate water extent maps. The script referenced the Moderate Resolution Imaging Spectroradiometer (MODIS) land-water mask to isolate areas of flood-induced waters. These areas were overlaid onto the National Land Cover Database's (NLCD) land cover data, the Oak Ridge National Laboratory's LandScan data, and Homeland Infrastructure Foundation-Level Data (HIFLD) to determine the classification of areas impacted and the population density affected by flooding. The automated algorithm was initially tested on the September 2016 flood event that occurred in the Upper Mississippi River Basin, and was then further tested on multiple flood events within the Mississippi River Basin. This script allows end users to create their own flood probability and impact maps for disaster mitigation and recovery efforts.
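
    As a hedged illustration of the water-extent step, the sketch below thresholds a normalized difference water index computed from Landsat 8 OLI green and near-infrared bands and subtracts a permanent-water mask; the project's actual classifier and thresholds are not published in this record, so the index choice and threshold here are assumptions.

    ```python
    import numpy as np

    def water_mask(green, nir, threshold=0.0):
        """NDWI = (green - nir) / (green + nir); values above threshold -> water."""
        ndwi = (green - nir) / np.maximum(green + nir, 1e-6)  # guard divide-by-zero
        return ndwi > threshold

    def flood_pixels(current_mask, permanent_mask):
        # Flood extent = current water minus the permanent (e.g. MODIS-derived) water.
        return current_mask & ~permanent_mask
    ```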

  13. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated blood grouping and antibody test system...

  14. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated blood grouping and antibody test system...

  15. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated blood grouping and antibody test system...

  16. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated blood grouping and antibody test system...

  17. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated blood grouping and antibody test system...

  18. SU-E-J-92: Validating Dose Uncertainty Estimates Produced by AUTODIRECT, An Automated Program to Evaluate Deformable Image Registration Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1–6% for the 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.
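
    The per-voxel uncertainty estimate based on a Student's t distribution can be sketched as follows, given the errors observed on the handful of test registrations (4 tests give 3 degrees of freedom). Variable names are illustrative, not AUTODIRECT's.

    ```python
    import numpy as np
    from scipy import stats

    def voxel_uncertainty(errors, confidence=0.95):
        """errors: (n_tests, n_voxels) dose-mapping errors from known deformations."""
        n = errors.shape[0]
        mean = errors.mean(axis=0)
        sem = errors.std(axis=0, ddof=1) / np.sqrt(n)        # standard error per voxel
        half_width = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sem
        return mean - half_width, mean + half_width          # per-voxel bounds
    ```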

  19. Human Factors Assessment: The Passive Final Approach Spacing Tool (pFAST) Operational Evaluation

    NASA Technical Reports Server (NTRS)

    Lee, Katharine K.; Sanford, Beverly D.

    1998-01-01

    Automation to assist air traffic controllers in the current terminal and en route air traffic environments is being developed at Ames Research Center in conjunction with the Federal Aviation Administration. This automation, known collectively as the Center-TRACON Automation System (CTAS), provides decision-making assistance to air traffic controllers through computer-generated advisories. One of the CTAS tools developed specifically to assist terminal area air traffic controllers is the Passive Final Approach Spacing Tool (pFAST). An operational evaluation of pFAST was conducted at the Dallas/Ft. Worth, Texas, Terminal Radar Approach Control (TRACON) facility. Human factors data collected during the test describe the impact of the automation upon the air traffic controller in terms of perceived workload and acceptance. Results showed that controller self-reported workload was not significantly increased or reduced by the pFAST automation; rather, controllers reported that the levels of workload remained primarily the same. Controller coordination and communication data were analyzed, and significant differences in the nature of controller coordination were found. Controller acceptance ratings indicated that pFAST was acceptable. This report describes the human factors data and results from the 1996 Operational Field Evaluation of Passive FAST.

  20. SU-E-T-398: Feasibility of Automated Tools for Robustness Evaluation of Advanced Photon and Proton Techniques in Oropharyngeal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Liang, X; Kalbasi, A

    2014-06-01

    Purpose: Advanced radiotherapy (RT) techniques such as proton pencil beam scanning (PBS) and photon-based volumetric modulated arc therapy (VMAT) have dosimetric advantages in the treatment of head and neck malignancies. However, anatomic or alignment changes during treatment may limit the robustness of PBS and VMAT plans. We assess the feasibility of automated deformable registration tools for robustness evaluation in adaptive PBS and VMAT RT of oropharyngeal cancer (OPC). Methods: We treated 10 patients with bilateral OPC with advanced RT techniques and obtained verification CT scans with physician-reviewed target and OAR contours. We generated 3 advanced RT plans for each patient: a proton PBS plan using 2 posterior oblique fields (2F), a proton PBS plan using an additional third low-anterior field (3F), and a photon VMAT plan using 2 arcs (Arc). For each of the planning techniques, we forward calculated initial (Ini) plans on the verification scans to create verification (V) plans. We extracted DVH indicators based on physician-generated contours for 2 target and 14 OAR structures to investigate the feasibility of two automated tools (contour propagation (CP) and dose deformation (DD)) as surrogates for routine clinical plan robustness evaluation. For each verification scan, we compared DVH indicators of V, CP, and DD plans in a head-to-head fashion using Student's t-test. Results: We performed 39 verification scans; each patient underwent 3 to 6 verification scans. We found no differences in doses to target or OAR structures between V and CP, V and DD, and CP and DD plans across all patients (p > 0.05). Conclusions: Automated robustness evaluation tools, CP and DD, accurately predicted dose distributions of verification (V) plans using physician-generated contours. These tools may be further developed as a potential robustness screening tool in the workflow for adaptive treatment of OPC using advanced RT techniques, reducing the need for physician-generated contours.

  1. Electrochemical Detection in Stacked Paper Networks.

    PubMed

    Liu, Xiyuan; Lillehoj, Peter B

    2015-08-01

    Paper-based electrochemical biosensors are a promising technology that enables rapid, quantitative measurements on an inexpensive platform. However, the control of liquids in paper networks is generally limited to a single sample delivery step. Here, we propose a simple method to automate the loading and delivery of liquid samples to sensing electrodes on paper networks by stacking multiple layers of paper. Using these stacked paper devices (SPDs), we demonstrate a unique strategy to fully immerse planar electrodes in aqueous liquids via capillary flow. Amperometric measurements of xanthine oxidase revealed that electrochemical sensors on four-layer SPDs generated detection signals up to 75% higher compared with those on single-layer paper devices. Furthermore, measurements could be performed with minimal user involvement and completed within 30 min. Due to its simplicity, enhanced automation, and capability for quantitative measurements, stacked paper electrochemical biosensors can be useful tools for point-of-care testing in resource-limited settings. © 2015 Society for Laboratory Automation and Screening.

  2. Web Navigation Sequences Automation in Modern Websites

    NASA Astrophysics Data System (ADS)

    Montoto, Paula; Pan, Alberto; Raposo, Juan; Bellas, Fernando; López, Javier

    Most of today’s web sources are designed to be used by humans, but they do not provide suitable interfaces for software programs. That is why a growing interest has arisen in so-called web automation applications, which are widely used for different purposes such as B2B integration, automated testing of web applications, or technology and business watch. Previous proposals assume models for generating and reproducing navigation sequences that are not able to correctly deal with new websites using technologies such as AJAX: on one hand, existing systems only allow recording simple navigation actions and, on the other hand, they are unable to detect the end of the effects caused by a user action. In this paper, we propose a set of new techniques to record and execute web navigation sequences able to deal with all the complexity existing in AJAX-based websites. We also present an exhaustive evaluation of the proposed techniques that shows very promising results.
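
    One generic way to "detect the end of the effects caused by a user action" on an AJAX page is an explicit wait on a settledness condition, as in the Selenium sketch below. This illustrates the problem the authors address rather than their recording system; the element ID and the jQuery activity check are placeholder assumptions that only apply to pages using jQuery.

    ```python
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Firefox()
    driver.get("https://example.com")
    driver.find_element(By.ID, "load-more").click()   # the recorded user action

    # Block until the asynchronous effects of the click have settled:
    WebDriverWait(driver, timeout=10).until(
        lambda d: d.execute_script("return jQuery.active === 0")  # no pending AJAX
    )
    ```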

  3. Automated analysis of connected speech reveals early biomarkers of Parkinson's disease in patients with rapid eye movement sleep behaviour disorder.

    PubMed

    Hlavnička, Jan; Čmejla, Roman; Tykalová, Tereza; Šonka, Karel; Růžička, Evžen; Rusz, Jan

    2017-02-02

    For generations, the evaluation of speech abnormalities in neurodegenerative disorders such as Parkinson's disease (PD) has been limited to perceptual tests or user-controlled laboratory analysis based upon rather small samples of human vocalizations. Our study introduces a fully automated method that yields significant features related to respiratory deficits, dysphonia, imprecise articulation and dysrhythmia from acoustic microphone data of natural connected speech for predicting early and distinctive patterns of neurodegeneration. We compared speech recordings of 50 subjects with rapid eye movement sleep behaviour disorder (RBD), 30 newly diagnosed, untreated PD patients and 50 healthy controls, and showed that subliminal parkinsonian speech deficits can be reliably captured even in RBD patients, who are at high risk of developing PD or other synucleinopathies. Thus, automated vocal analysis should soon be able to contribute to screening and diagnostic procedures for prodromal parkinsonian neurodegeneration in natural environments.

  4. Helping System Engineers Bridge the Peaks

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen

    2014-01-01

    In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.

  5. Rockwell Automation PLC-5 Lands Stennis Space Center with a Reliable, Flexible Control System

    NASA Technical Reports Server (NTRS)

    Epperson, Dave

    2003-01-01

    Ever since the first rocket was launched, people have been infatuated with the vast and uncharted frontier of space. Whether it's visiting a space center or watching a shuttle launch, people are waiting to see what will be discovered next. And even though orbiting the Earth or taking soil samples from the Moon now seems effortless, decades' worth of behind-the-scenes work have helped the U.S. space program get to this point. Even today, NASA must take every precaution to ensure equipment is up to the endeavor of setting foot on the moon. As part of the initial push to put the first man on the moon, NASA established the John C. Stennis Space Center in Hancock County, Mississippi, in 1961 for space engine propulsion system development. Today, Stennis has three major test complexes where engine and component testing is carried out and integrated into full motion systems for space shuttles and vehicles, as well as secondary testing facilities. With different products being tested throughout the facilities, Stennis was in need of an automation system that could link the operations. By integrating a control system based on Rockwell Automation's flexible and reliable PLC-5 controller, Stennis was able to implement projects more efficiently and focus its efforts on getting the next generation of products ready for space.

  6. Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz

    2008-01-01

    In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.

  7. Using Automated Theorem Provers to Certify Auto-Generated Aerospace Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann

    2004-01-01

    We describe a system for the automated certification of safety properties of NASA software. The system uses Hoare-style program verification technology to generate proof obligations which are then processed by an automated first-order theorem prover (ATP). For full automation, however, the obligations must be aggressively preprocessed and simplified. We describe the unique requirements this places on the ATP and demonstrate how the individual simplification stages, which are implemented by rewriting, influence the ability of the ATP to solve the proof tasks. Experiments on more than 25,000 tasks were carried out using Vampire, Spass, and e-setheo.

  8. High-Throughput Screening of Na(V)1.7 Modulators Using a Giga-Seal Automated Patch Clamp Instrument.

    PubMed

    Chambers, Chris; Witton, Ian; Adams, Cathryn; Marrington, Luke; Kammonen, Juha

    2016-03-01

    Voltage-gated sodium (Na(V)) channels have an essential role in the initiation and propagation of action potentials in excitable cells, such as neurons. Of these channels, Na(V)1.7 has been indicated as a key channel for pain sensation. While extensive efforts have gone into discovering novel Na(V)1.7 modulating compounds for the treatment of pain, none has reached the market yet. In the last two years, new compound screening technologies have been introduced, which may speed up the discovery of such compounds. The Sophion Qube(®) is a next-generation 384-well giga-seal automated patch clamp (APC) screening instrument, capable of testing thousands of compounds per day. By combining high-throughput screening and follow-up compound testing on the same APC platform, it should be possible to accelerate the hit-to-lead stage of ion channel drug discovery and help identify the most interesting compounds faster. Following a period of instrument beta-testing, a Na(V)1.7 high-throughput screen was run with two Pfizer plate-based compound subsets. In total, data were generated for 158,000 compounds at a median success rate of 83%, which can be considered high in APC screening. In parallel, IC50 assay validation and protocol optimization was completed with a set of reference compounds to understand how the IC50 potencies generated on the Qube correlate with data generated on the more established Sophion QPatch(®) APC platform. In summary, the results presented here demonstrate that the Qube provides a comparable but much faster approach to study Na(V)1.7 in a robust and reliable APC assay for compound screening.
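
    IC50 values of the kind compared across the Qube and QPatch platforms are conventionally extracted by fitting a Hill (four-parameter logistic) curve to concentration-response data. The sketch below shows such a fit on synthetic data; it is illustrative and not the screening pipeline described above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, top, bottom, ic50, slope):
        """Four-parameter logistic concentration-response model."""
        return bottom + (top - bottom) / (1 + (conc / ic50) ** slope)

    conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0])        # micromolar (synthetic)
    response = np.array([0.98, 0.90, 0.55, 0.15, 0.04])   # normalized current

    params, _ = curve_fit(hill, conc, response, p0=[1.0, 0.0, 0.1, 1.0])
    print(f"IC50 ~ {params[2]:.3f} uM")
    ```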

  9. Automated Dissolution for Enteric-Coated Aspirin Tablets: A Case Study for Method Transfer to a RoboDis II.

    PubMed

    Ibrahim, Sarah A; Martini, Luigi

    2014-08-01

    Dissolution method transfer is a complicated yet common process in the pharmaceutical industry. With increased pharmaceutical product manufacturing and dissolution acceptance requirements, dissolution testing has become one of the most labor-intensive quality control testing methods. There is an increasing trend toward automation in dissolution testing, particularly for large pharmaceutical companies, to reduce variability and increase personnel efficiency. There is no official guideline for transferring a dissolution testing method from a manual to a semi-automated or fully automated dissolution tester. In this study, a manual multipoint dissolution testing procedure for an enteric-coated aspirin tablet was transferred effectively and reproducibly to a fully automated dissolution testing device, RoboDis II. Enteric-coated aspirin samples were used as a model formulation to assess the feasibility and accuracy of media pH change during continuous automated dissolution testing. Several RoboDis II parameters were evaluated to ensure the integrity and equivalency of the dissolution method transferred from a manual dissolution tester. This study provides a systematic outline for the transfer of a manual dissolution testing protocol to an automated dissolution tester, and further supports that automated dissolution testers compliant with regulatory requirements and similar to manual dissolution testers facilitate method transfer. © 2014 Society for Laboratory Automation and Screening.

  10. Automation of diagnostic genetic testing: mutation detection by cyclic minisequencing.

    PubMed

    Alagrund, Katariina; Orpana, Arto K

    2014-01-01

    The rising role of nucleic acid testing in clinical decision making is creating a need for efficient and automated diagnostic nucleic acid test platforms. Clinical use of nucleic acid testing sets demands for shorter turnaround times (TATs), lower production costs, and robust, reliable methods that can easily adopt new test panels and run rare tests on a random-access principle. Here we present a novel home-brew laboratory automation platform for diagnostic mutation testing. This platform is based on cyclic minisequencing (cMS) and two-color near-infrared (NIR) detection. Pipetting is automated using Tecan Freedom EVO pipetting robots and all assays are performed in 384-well microplate format. The automation platform includes a data processing system, controlling all procedures, and automated patient result reporting to the hospital information system. We have found automated cMS to be a reliable, inexpensive and robust method for nucleic acid testing for a wide variety of diagnostic tests. The platform is currently in clinical use for over 80 mutations or polymorphisms. In addition to tests performed on blood samples, the system also performs an epigenetic test for methylation of the MGMT gene promoter, and companion diagnostic tests for analysis of KRAS and BRAF gene mutations from formalin-fixed and paraffin-embedded tumor samples. Automation of genetic test reporting has been found reliable and efficient, decreasing the workload of academic personnel.

  11. Supporting skill acquisition in cochlear implant surgery through virtual reality simulation.

    PubMed

    Copson, Bridget; Wijewickrema, Sudanthi; Zhou, Yun; Piromchai, Patorn; Briggs, Robert; Bailey, James; Kennedy, Gregor; O'Leary, Stephen

    2017-03-01

    To evaluate the effectiveness of a virtual reality (VR) temporal bone simulator in training for cochlear implant surgery, we compared the performance of 12 otolaryngology registrars conducting simulated cochlear implant surgery before (pre-test) and after (post-tests) receiving training on a VR temporal bone surgery simulator with automated performance feedback. The post-test tasks were two temporal bones: one a mirror image of the temporal bone used as the pre-test, the other a novel temporal bone. Participant performances were assessed by an otologist with a validated cochlear implant competency assessment tool. Structural damage was derived from an automatically generated simulator metric and compared between time points. The Wilcoxon signed-rank test showed a significant improvement, with a large effect size, in the total performance scores between the pre-test (PT) and both the first and second post-tests (PT1, PT2) (PT-PT1: P = 0.007, r = 0.78; PT-PT2: P = 0.005, r = 0.82). The results of the study indicate that VR simulation with automated guidance can effectively be used to train surgeons in complex temporal bone surgeries such as cochlear implantation.
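
    The reported statistics pair a Wilcoxon signed-rank test with the effect size r = Z / sqrt(N). A minimal sketch of that computation on made-up pre/post scores follows; the conversion from a p-value back to a Z score is a common approximation, and the scores are purely illustrative.

    ```python
    import numpy as np
    from scipy import stats

    pre  = np.array([12, 15, 11, 14, 10, 13, 12, 16, 11, 13, 14, 12])  # made up
    post = np.array([18, 19, 16, 20, 15, 18, 17, 21, 16, 19, 20, 17])  # made up

    res = stats.wilcoxon(pre, post)                 # paired signed-rank test
    z = stats.norm.ppf(res.pvalue / 2)              # two-sided p back to a Z score
    r = abs(z) / np.sqrt(len(pre))                  # r > 0.5 is a large effect
    print(f"P = {res.pvalue:.3f}, r = {r:.2f}")
    ```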

  12. Production and quality assurance automation in the Goddard Space Flight Center Flight Dynamics Facility

    NASA Technical Reports Server (NTRS)

    Chapman, K. B.; Cox, C. M.; Thomas, C. W.; Cuevas, O. O.; Beckman, R. M.

    1994-01-01

    The Flight Dynamics Facility (FDF) at the NASA Goddard Space Flight Center (GSFC) generates numerous products for NASA-supported spacecraft, including the Tracking and Data Relay Satellites (TDRS's), the Hubble Space Telescope (HST), the Extreme Ultraviolet Explorer (EUVE), and the space shuttle. These products include orbit determination data, acquisition data, event scheduling data, and attitude data. In most cases, product generation involves repetitive execution of many programs. The increasing number of missions supported by the FDF has necessitated the use of automated systems to schedule, execute, and quality assure these products. This automation allows the delivery of accurate products in a timely and cost-efficient manner. To be effective, these systems must automate as many repetitive operations as possible and must be flexible enough to meet changing support requirements. The FDF Orbit Determination Task (ODT) has implemented several systems that automate product generation and quality assurance (QA). These systems include the Orbit Production Automation System (OPAS), the New Enhanced Operations Log (NEOLOG), and the Quality Assurance Automation Software (QA Tool). Implementation of these systems has resulted in a significant reduction in required manpower, elimination of shift work and most weekend support, and improved support quality, while incurring minimal development cost. This paper will present an overview of the concepts used and experiences gained from the implementation of these automation systems.

  13. Automated Assessment in Massive Open Online Courses

    ERIC Educational Resources Information Center

    Ivaniushin, Dmitrii A.; Shtennikov, Dmitrii G.; Efimchick, Eugene A.; Lyamin, Andrey V.

    2016-01-01

    This paper describes an approach to using automated assessments in online courses. The Open edX platform is used as the online course platform. The new assessment type uses Scilab as the learning and solution-validation tool. This approach allows the use of automated individual variant generation and automated solution checks without involving the course…

  14. Automated generation of individually customized visualizations of diagnosis-specific medical information using novel techniques of information extraction

    NASA Astrophysics Data System (ADS)

    Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang

    2005-04-01

    Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.

  15. Development and automation of a test of impulse control in zebrafish

    PubMed Central

    Parker, Matthew O.; Ife, Dennis; Ma, Jun; Pancholi, Mahesh; Smeraldi, Fabrizio; Straw, Chris; Brennan, Caroline H.

    2013-01-01

    Deficits in impulse control (difficulties in inhibition of a pre-potent response) are fundamental to a number of psychiatric disorders, but the molecular and cellular basis is poorly understood. Zebrafish offer a very useful model for exploring these mechanisms, but there is currently a lack of validated procedures for measuring impulsivity in fish. In mammals, impulsivity can be measured by examining rates of anticipatory responding in the 5-choice serial reaction time task (5-CSRTT), a continuous performance task where the subject is reinforced upon accurate detection of a briefly presented light in one of five distinct spatial locations. This paper describes the development of a fully-integrated automated system for testing impulsivity in adult zebrafish. We outline the development of our image analysis software and its integration with National Instruments drivers and actuators to produce the system. We also describe an initial validation of the system through a one-generation screen of chemically mutagenized zebrafish, where the testing parameters were optimized. PMID:24133417

  16. Building "e-rater"® Scoring Models Using Machine Learning Methods. Research Report. ETS RR-16-04

    ERIC Educational Resources Information Center

    Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A.

    2016-01-01

    The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…
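
    A minimal sketch of such a scoring model, with made-up feature values standing in for e-rater's features, is shown below; it is illustrative only and not ETS's production model.

    ```python
    from sklearn.linear_model import LinearRegression

    # Each row: illustrative essay features (e.g. length, grammar accuracy,
    # vocabulary sophistication); targets: human holistic scores.
    features = [[350, 0.92, 12.1], [480, 0.97, 14.3], [210, 0.85, 9.8]]
    human_scores = [3, 5, 2]

    model = LinearRegression().fit(features, human_scores)
    predicted = model.predict([[400, 0.94, 13.0]])  # machine score for a new essay
    ```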

  17. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and automate such assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  18. An Automated Method for Navigation Assessment for Earth Survey Sensors Using Island Targets

    NASA Technical Reports Server (NTRS)

    Patt, F. S.; Woodward, R. H.; Gregg, W. W.

    1997-01-01

    An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalogue of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean colour sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.
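
    The core island-matching idea can be sketched as a nearest-neighbour pairing of detected island locations against the reference catalogue, with the residual offsets feeding the navigation error statistics. The tolerance and structure below are illustrative assumptions, not the SeaWiFS implementation, whose pattern matching is more robust than a plain nearest-neighbour search.

    ```python
    import numpy as np

    def match_islands(detected, catalog, max_sep_deg=0.1):
        """detected, catalog: (N, 2) arrays of (lat, lon); returns residual offsets."""
        residuals = []
        for d in detected:
            sep = np.linalg.norm(catalog - d, axis=1)
            j = sep.argmin()
            if sep[j] < max_sep_deg:               # accept only unambiguous matches
                residuals.append(d - catalog[j])   # per-island navigation error
        return np.array(residuals)                 # feed into error statistics
    ```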

  19. Feasibility of automated dropsize distributions from holographic data using digital image processing techniques. [particle diameter measurement technique

    NASA Technical Reports Server (NTRS)

    Feinstein, S. P.; Girard, M. A.

    1979-01-01

    An automated technique for measuring particle diameters and their spatial coordinates from holographic reconstructions is being developed. Preliminary tests on actual cold-flow holograms of impinging jets indicate that a suitable discriminant algorithm consists of a Fourier-Gaussian noise filter and a contour thresholding technique. This process identifies circular as well as noncircular objects. The desired objects (in this case, circular or possibly ellipsoidal) are then selected automatically from the above set and stored with their parametric representations. From this data, dropsize distributions as a function of spatial coordinates can be generated and combustion effects due to hardware and/or physical variables studied.

  20. Automated navigation assessment for earth survey sensors using island targets

    NASA Technical Reports Server (NTRS)

    Patt, Frederick S.; Woodward, Robert H.; Gregg, Watson W.

    1997-01-01

    An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalog of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean color sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.

  1. Correction of microplate location effects improves performance of the thrombin generation test

    PubMed Central

    2013-01-01

    Background Microplate-based thrombin generation test (TGT) is widely used as a clinical measure of global hemostatic potential, and it is becoming a useful tool for control of drug potency and quality by drug manufacturers. However, the convenience of the microtiter plate technology can be deceiving: microplate assays are prone to location-based variability in different parts of the microtiter plate. Methods In this report, we evaluated the well-to-well consistency of the TGT variant specifically applied to the quantitative detection of thrombogenic substances in the immune globulin product. We also studied the utility of previously described microplate layout designs in the TGT experiment. Results Location of the sample on the microplate (location effect) contributes to the variability of TGT measurements. Use of manual pipetting techniques and applications of the TGT to the evaluation of procoagulant enzymatic substances are especially sensitive. The effects were not sensitive to temperature or choice of microplate reader. The smallest location effects were observed with an automated dispenser-based calibrated thrombogram instrument. Even for an automated instrument, the use of a calibration curve resulted in up to 30% bias in thrombogenic potency assignment. Conclusions Use of a symmetrical version of the strip-plot layout was demonstrated to help minimize location artifacts even under worst-case conditions. Strip-plot layouts are required for quantitative thrombin-generation-based bioassays used in the biotechnological field. PMID:23829491

  2. Correction of microplate location effects improves performance of the thrombin generation test.

    PubMed

    Liang, Yideng; Woodle, Samuel A; Shibeko, Alexey M; Lee, Timothy K; Ovanesov, Mikhail V

    2013-07-05

    Microplate-based thrombin generation test (TGT) is widely used as a clinical measure of global hemostatic potential, and it is becoming a useful tool for control of drug potency and quality by drug manufacturers. However, the convenience of the microtiter plate technology can be deceiving: microplate assays are prone to location-based variability in different parts of the microtiter plate. In this report, we evaluated the well-to-well consistency of the TGT variant specifically applied to the quantitative detection of thrombogenic substances in the immune globulin product. We also studied the utility of previously described microplate layout designs in the TGT experiment. Location of the sample on the microplate (location effect) contributes to the variability of TGT measurements. Use of manual pipetting techniques and applications of the TGT to the evaluation of procoagulant enzymatic substances are especially sensitive. The effects were not sensitive to temperature or choice of microplate reader. The smallest location effects were observed with an automated dispenser-based calibrated thrombogram instrument. Even for an automated instrument, the use of a calibration curve resulted in up to 30% bias in thrombogenic potency assignment. Use of a symmetrical version of the strip-plot layout was demonstrated to help minimize location artifacts even under worst-case conditions. Strip-plot layouts are required for quantitative thrombin-generation-based bioassays used in the biotechnological field.

  3. Automated flow quantification in valvular heart disease based on backscattered Doppler power analysis: implementation on matrix-array ultrasound imaging systems.

    PubMed

    Buck, Thomas; Hwang, Shawn M; Plicht, Björn; Mucci, Ronald A; Hunold, Peter; Erbel, Raimund; Levine, Robert A

    2008-06-01

    Cardiac ultrasound imaging systems are limited in the noninvasive quantification of valvular regurgitation due to indirect measurements and inaccurate hemodynamic assumptions. We recently demonstrated that the principle of integrating backscattered acoustic Doppler power times velocity can be used for flow quantification in valvular regurgitation directly at the vena contracta of a regurgitant flow jet. We now aimed to implement automated Doppler power flow analysis software on a standard cardiac ultrasound system utilizing novel matrix-array transducer technology, with a detailed description of the system requirements, components, and software contributing to the system. This system, based on a 3.5 MHz matrix-array cardiac ultrasound scanner (Sonos 5500, Philips Medical Systems), was validated by means of comprehensive experimental signal generator trials, in vitro flow phantom trials, and in vivo testing in 48 patients with mitral regurgitation of different severity and etiology, using magnetic resonance imaging (MRI) for reference. All measurements displayed good correlation to the reference values, indicating successful implementation of automated Doppler power flow analysis on a matrix-array ultrasound imaging system. Systematic underestimation of effective regurgitant orifice areas >0.65 cm(2) and volumes >40 ml was found due to the currently limited Doppler beam width, which could be readily overcome by the use of new-generation 2D matrix-array technology. Automated flow quantification in valvular heart disease based on backscattered Doppler power can thus be fully implemented on board routinely used matrix-array ultrasound imaging systems. Such automated Doppler power flow analysis quantifies valvular regurgitant flow directly, noninvasively, and user-independently, overcoming the practical limitations of current techniques.
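
    The underlying quantity is simple to state in code: instantaneous flow is proportional to the sum over Doppler sample volumes of backscattered power times velocity, and regurgitant volume is that flow integrated over the regurgitation interval. The sketch below is a schematic of that principle with an assumed calibration constant, not the implemented clinical software.

    ```python
    import numpy as np

    def flow_rate(power, velocity, k=1.0):
        """power, velocity: arrays over sample volumes spanning the vena contracta.
        k is an assumed calibration constant relating power-weighted velocity to flow."""
        return k * np.sum(power * velocity)

    def regurgitant_volume(flow_samples, dt):
        # Integrate instantaneous flow over the regurgitation interval.
        return np.trapz(flow_samples, dx=dt)
    ```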

  4. Generating Code Review Documentation for Auto-Generated Mission-Critical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2009-01-01

    Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements and formally verifies that the auto-generated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.

  5. An Analysis of Automated Solutions for the Certification and Accreditation of Navy Medicine Information Assets

    DTIC Science & Technology

    2005-09-01

    Discovery of network security threats and vulnerabilities will be done through penetration testing during the C&A process. The automated solution supports (1) discovery, inventory, scanning, and loading of C&A information into its central database, (2) automatic generation of the SRTM, and (3) automatic generation...

  6. Developing a Learning Algorithm-Generated Empirical Relaxer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Wayne; Kallman, Josh; Toreja, Allen

    2016-03-30

    One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a regressive random forest algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
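
    A hedged sketch of the LAGER idea, learning a per-zone relaxation amount from mesh-quality features with a random-forest regressor, follows; the feature names and values are illustrative assumptions, as the record does not list the actual feature set.

    ```python
    from sklearn.ensemble import RandomForestRegressor

    # Each row: assumed mesh-quality features for one zone (e.g. aspect ratio,
    # skewness, Jacobian quality); targets: how much the user relaxed that zone
    # in past successful runs.
    X_train = [[1.2, 0.10, 0.95], [3.5, 0.40, 0.60], [2.0, 0.25, 0.80]]
    y_train = [0.05, 0.60, 0.30]

    relaxer = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
    relax_amount = relaxer.predict([[2.8, 0.33, 0.70]])  # replaces the user's judgment call
    ```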

  7. Measuring Performance with Library Automated Systems.

    ERIC Educational Resources Information Center

    O'Farrell, John P.

    2000-01-01

    Investigates the capability of three library automated systems to generate some of the datasets necessary to form the ISO (International Standards Organization) standard on performance measurement within libraries, based on research in Liverpool John Moores University (United Kingdom). Concludes that the systems are weak in generating the…

  8. Department of Defense (DOD) Automated Biometric Identification System (ABIS) Version 1.2: Initial Operational Test and Evaluation Report

    DTIC Science & Technology

    2015-05-01

    Director, Operational Test and Evaluation. Department of Defense (DOD) Automated Biometric Identification System (ABIS) Version 1.2, Initial Operational Test and Evaluation Report, May 2015. This report covers the initial operational test and evaluation of the DOD Automated Biometric Identification System (ABIS) Version 1.2.

  9. Agar Disk Diffusion and Automated Microbroth Dilution Produce Similar Antimicrobial Susceptibility Testing Results for Salmonella Serotypes Newport, Typhimurium, and 4,5,12:i-, But Differ in Economic Cost

    PubMed Central

    Cummings, Kevin J.; Warnick, Lorin D.; Schukken, Ynte H.; Siler, Julie D.; Gröhn, Yrjo T.; Davis, Margaret A.; Besser, Tom E.; Wiedmann, Martin

    2011-01-01

    Data generated using different antimicrobial testing methods often have to be combined, but the equivalence of such results is difficult to assess. Here we compared two commonly used antimicrobial susceptibility testing methods, automated microbroth dilution and agar disk diffusion, for 8 common drugs, using 222 Salmonella isolates of serotypes Newport, Typhimurium, and 4,5,12:i-, which had been isolated from clinical salmonellosis cases among cattle and humans. Isolate classification corresponded well between tests, with 95% overall category agreement. Test results were significantly negatively correlated, and Spearman's correlation coefficients ranged from −0.98 to −0.38. Using Cox's proportional hazards model we determined that for most drugs, a 1 mm increase in zone diameter resulted in an estimated 20%–40% increase in the hazard of growth inhibition. However, additional parameters such as isolation year or serotype often impacted the hazard of growth inhibition as well. Comparison of economical feasibility showed that agar disk diffusion is clearly more cost-effective if the average sample throughput is small but that both methods are comparable at high sample throughput. In conclusion, for the Salmonella serotypes and antimicrobial drugs analyzed here, antimicrobial susceptibility data generated based on either test are qualitatively very comparable, and the current published break points for both methods are in excellent agreement. Economic feasibility clearly depends on the specific laboratory settings, and disk diffusion might be an attractive alternative for certain applications such as surveillance studies. PMID:21877930

  10. Validating a UAV artificial intelligence control system using an autonomous test case generator

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Huber, Justin

    2013-05-01

    The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed, and testing cost.

  11. Robo-Lector - a novel platform for automated high-throughput cultivations in microtiter plates with high information content.

    PubMed

    Huber, Robert; Ritter, Daniel; Hering, Till; Hillmer, Anne-Kathrin; Kensy, Frank; Müller, Carsten; Wang, Le; Büchs, Jochen

    2009-08-01

    In industry and academic research, there is an increasing demand for flexible automated microfermentation platforms with advanced sensing technology. However, up to now, conventional platforms cannot generate continuous data in high-throughput cultivations, in particular for monitoring biomass and fluorescent proteins. Furthermore, microfermentation platforms are needed that can easily combine cost-effective, disposable microbioreactors with downstream processing and analytical assays. To meet this demand, a novel automated microfermentation platform consisting of a BioLector and a liquid-handling robot (Robo-Lector) was successfully built and tested. The BioLector provides a cultivation system that is able to permanently monitor microbial growth and the fluorescence of reporter proteins under defined conditions in microtiter plates. Three exemplary methods were programmed on the Robo-Lector platform to study in detail high-throughput cultivation processes and especially recombinant protein expression. The host/vector system E. coli BL21(DE3) pRhotHi-2-EcFbFP, expressing the fluorescence protein EcFbFP, was hereby investigated. With the method 'induction profiling' it was possible to conduct 96 different induction experiments (varying inducer concentrations from 0 to 1.5 mM IPTG at 8 different induction times) simultaneously in an automated way. The method 'biomass-specific induction' made it possible to automatically induce cultures with different growth kinetics in a microtiter plate at the same biomass concentration, which resulted in a relative standard deviation of the EcFbFP production of only +/- 7%. The third method 'biomass-specific replication' enabled the generation of equal initial biomass concentrations in main cultures from precultures with different growth kinetics. This was realized by automatically transferring an appropriate inoculum volume from the different preculture microtiter wells to respective wells of the main culture plate, where subsequently similar growth kinetics could be obtained. The Robo-Lector generates extensive kinetic data in high-throughput cultivations, particularly for biomass and fluorescence protein formation. Based on the non-invasive on-line-monitoring signals, actions of the liquid-handling robot can easily be triggered. This interaction between the robot and the BioLector (Robo-Lector) combines high-content data generation with systematic high-throughput experimentation in an automated fashion, offering new possibilities to study biological production systems. The presented platform uses a standard liquid-handling workstation with widespread automation possibilities. Thus, high-throughput cultivations can now be combined with small-scale downstream processing techniques and analytical assays. Ultimately, this novel versatile platform can accelerate and intensify research and development in the field of systems biology as well as modelling and bioprocess optimization.
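
    As an illustration of the 'biomass-specific induction' idea, the sketch below polls a per-well biomass signal and triggers induction once each well crosses the same target biomass, so wells with different growth kinetics are induced at equal biomass; the growth model and robot command are hypothetical stand-ins for the BioLector and liquid-handler interfaces, which the abstract does not expose.

```python
import math

TARGET_BIOMASS = 5.0   # scattered-light units at which to induce (hypothetical)

def read_biomass(well, t):
    """Stand-in for the BioLector's on-line signal: logistic growth whose
    midpoint differs per well, mimicking different growth kinetics."""
    return 10.0 / (1.0 + math.exp(-0.6 * (t - (6.0 + 2.0 * well))))

def dispense_iptg(well, t):
    """Stand-in for the liquid-handling robot's induction command."""
    print(f"t={t:4.1f} h: induce well {well}")

def biomass_specific_induction(wells, t_end=24.0, dt=0.25):
    """Induce every well at the same biomass despite different kinetics."""
    pending, t = set(wells), 0.0
    while pending and t < t_end:
        for well in sorted(pending):
            if read_biomass(well, t) >= TARGET_BIOMASS:
                dispense_iptg(well, t)   # robot action triggered by signal
                pending.discard(well)
        t += dt                          # next monitoring cycle
    return t

biomass_specific_induction(range(4))
```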

  12. Toward practical 3D radiography of pipeline girth welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wassink, Casper, E-mail: casper.wassink@applusrtd.com; Hol, Martijn, E-mail: martijn.hol@applusrtd.com; Flikweert, Arjan, E-mail: martijn.hol@applusrtd.com

    2015-03-31

    Digital radiography has made its way into in-the-field girth weld testing. With recent generations of detectors and x-ray tubes it is possible to reach the image quality desired in standards as well as the speed of inspection desired to be competitive with film radiography and automated ultrasonic testing. This paper will show the application of these technologies in the RTD Rayscan system. The method for achieving an image quality that complies with or even exceeds prevailing industrial standards will be presented, as well as the application on pipeline girth welds with CRA layers. A next step in development will be to also achieve a measurement of weld flaw height to allow for performing an Engineering Critical Assessment on the weld. This will allow for similar acceptance limits as currently used with Automated Ultrasonic Testing of pipeline girth welds. Although a sufficient sizing accuracy was already demonstrated and qualified in the TomoCAR system, testing in some applications is restricted by time limits. The paper will present some experiments that were performed to achieve flaw height approximation within these time limits.

  13. Identifying problems and generating recommendations for enhancing complex systems: applying the abstraction hierarchy framework as an analytical tool.

    PubMed

    Xu, Wei

    2007-12-01

    This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.

  14. Long-term evaluation of TiO2-based 68Ge/68Ga generators and optimized automation of [68Ga]DOTATOC radiosynthesis.

    PubMed

    Lin, Mai; Ranganathan, David; Mori, Tetsuya; Hagooly, Aviv; Rossin, Raffaella; Welch, Michael J; Lapi, Suzanne E

    2012-10-01

    Interest in using (68)Ga is rapidly increasing for clinical PET applications due to its favorable imaging characteristics and increased accessibility. The focus of this study was to provide our long-term evaluations of the two TiO(2)-based (68)Ge/(68)Ga generators and develop an optimized automation strategy to synthesize [(68)Ga]DOTATOC by using HEPES as a buffer system. This data will be useful in standardizing the evaluation of (68)Ge/(68)Ga generators and automation strategies to comply with regulatory issues for clinical use. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. What Is an Automated External Defibrillator?

    MedlinePlus

    An automated external defibrillator (AED) is a lightweight, portable device ... detect a rhythm that should be ...

  16. From an automated flight-test management system to a flight-test engineer's workstation

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Brumbaugh, R. W.; Hewett, M. D.; Tartt, D. M.

    1992-01-01

    Described here are the capabilities and evolution of a flight-test engineer's workstation (called TEST PLAN) from an automated flight-test management system. The concept and capabilities of the automated flight-test management system are explored and discussed to illustrate the value of advanced system prototyping and evolutionary software development.

  17. PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD

    NASA Technical Reports Server (NTRS)

    Suhs, Norman E.; Rogers, Stuart E.; Dietz, William E.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    An all-new, automated version of the PEGASUS software has been developed and tested. PEGASUS provides the hole-cutting and connectivity information between overlapping grids, and is used as the final part of the grid generation process for overset-grid computational fluid dynamics approaches. The new PEGASUS code (Version 5) has many new features: automated hole cutting; a projection scheme for fixing gaps in overset surfaces; more efficient interpolation search methods using an alternating digital tree; hole-size optimization based on adding additional layers of fringe points; and an automatic restart capability. The new code has also been parallelized using the Message Passing Interface standard. Parallelization speeds up execution by an order of magnitude, and by up to a factor of 30 for very large problems. The results of three example cases are presented: a three-element high-lift airfoil, a generic business jet configuration, and a complete Boeing 777-200 aircraft in a high-lift landing configuration. Comparisons of the computed flow fields for the airfoil and 777 test cases between the old and new versions of the PEGASUS codes show excellent agreement with each other and with experimental results.

  18. Progress of artificial pancreas devices towards clinical use: the first outpatient studies.

    PubMed

    Russell, Steven J

    2015-04-01

    This article describes recent progress in the automated control of glycemia in type 1 diabetes with artificial pancreas devices that combine continuous glucose monitoring with automated decision-making and insulin delivery. After a gestation period of closely supervised feasibility studies in research centers, the last 2 years have seen publication of studies testing these devices in outpatient environments, and many more such studies are ongoing. The most basic form of automation, suspension of insulin delivery for actual or predicted hypoglycemia, has been shown to be effective and well tolerated, and a first-generation device has actually reached the market. Artificial pancreas devices that actively dose insulin fall into two categories, those that dose insulin alone and those that also use glucagon to prevent and treat hypoglycemia (bihormonal artificial pancreas). Initial outpatient clinical trials have shown that both strategies can improve glycemic management in comparison with patient-controlled insulin pump therapy, but only the bihormonal strategy has been tested without restrictions on exercise. Artificial pancreas technology has the potential to reduce acute and chronic complications of diabetes and mitigate the burden of diabetes self-management. Successful outpatient studies bring these technologies one step closer to availability for patients.

  19. An Intelligent Automation Platform for Rapid Bioprocess Design.

    PubMed

    Wu, Tianyi; Zhou, Yuhong

    2014-08-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.

  20. Accelerated Evaluation of Automated Vehicles Safety in Lane-Change Scenarios Based on Importance Sampling Techniques

    PubMed Central

    Zhao, Ding; Lam, Henry; Peng, Huei; Bao, Shan; LeBlanc, David J.; Nobukawa, Kazutoshi; Pan, Christopher S.

    2016-01-01

    Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved accelerated rate is around 2000 to 20 000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV to challenging scenarios that will take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs. PMID:27840592
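
    The core of the accelerated-evaluation idea, importance sampling with cross-entropy-optimized skewing, can be sketched on a one-dimensional toy problem; the standard Gaussian disturbance and crash threshold below are illustrative stand-ins for the paper's lane-change behavior models, not the actual study setup.

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 4.0          # "crash" threshold on a standardized disturbance (toy)
N, RHO = 10_000, 0.1 # samples per iteration, elite fraction

def crash(x):
    """Toy crash indicator: the disturbance exceeds GAMMA."""
    return x > GAMMA

# Cross-entropy search for the mean of the skewed (importance) distribution
mu, gamma_t = 0.0, -np.inf
for _ in range(50):                      # iteration cap for safety
    x = rng.normal(mu, 1.0, N)
    gamma_t = min(GAMMA, np.quantile(x, 1.0 - RHO))
    w = np.exp(-mu * x + 0.5 * mu**2)    # N(0,1) / N(mu,1) likelihood ratio
    elite = x >= gamma_t
    mu = float(np.sum(w[elite] * x[elite]) / np.sum(w[elite]))  # CE update
    if gamma_t >= GAMMA:
        break

# Importance-sampling estimate of the rare crash probability, with weights
# restoring the statistics of the original (nonaccelerated) distribution
x = rng.normal(mu, 1.0, N)
w = np.exp(-mu * x + 0.5 * mu**2)
p_hat = float(np.mean(w * crash(x)))
print(f"skewed mean: {mu:.2f}, estimated crash probability: {p_hat:.2e}")
```

    The likelihood-ratio weights are what let the skewed (risky) simulations still yield an unbiased estimate of the real-world crash rate, which is the mechanism behind the quoted 2000 to 20 000 acceleration.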

  1. Accelerated Evaluation of Automated Vehicles Safety in Lane-Change Scenarios Based on Importance Sampling Techniques.

    PubMed

    Zhao, Ding; Lam, Henry; Peng, Huei; Bao, Shan; LeBlanc, David J; Nobukawa, Kazutoshi; Pan, Christopher S

    2017-03-01

    Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved accelerated rate is around 2000 to 20 000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV to challenging scenarios that will take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs.

  2. SU-G-TeP1-05: Development and Clinical Introduction of Automated Radiotherapy Treatment Planning for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkel, D; Bol, GH; Asselen, B van

    Purpose: To develop an automated radiotherapy treatment planning and optimization workflow for prostate cancer in order to generate clinical treatment plans. Methods: A fully automated radiotherapy treatment planning and optimization workflow was developed based on the treatment planning system Monaco (Elekta AB, Stockholm, Sweden). To evaluate our method, a retrospective planning study (n=100) was performed on patients treated for prostate cancer with 5-field intensity-modulated radiotherapy, receiving a dose of 35×2 Gy to the prostate and vesicles and a simultaneous integrated boost of 35×0.2 Gy to the prostate only. A comparison was made between the dosimetric values of the automatically and manually generated plans. Operator time to generate a plan and plan efficiency were measured. Results: A comparison of the dosimetric values shows that automatically generated plans yield more beneficial dosimetric values. In automatic plans, reductions of 43% in the V72Gy of the rectum and 13% in the V72Gy of the bladder are observed when compared to the manually generated plans. Smaller variance in dosimetric values is seen, i.e. the intra- and interplanner variability is decreased. For 97% of the automatically generated plans and 86% of the clinical plans all criteria for target coverage and organs-at-risk constraints are met. The number of plan segments and monitor units is reduced by 13% and 9%, respectively. Automated planning requires less than one minute of operator time compared to over an hour for manual planning. Conclusion: The automatically generated plans are highly suitable for clinical use. The plans have less variance and a large gain in time efficiency has been achieved. Currently, a pilot study is being performed, comparing the preference of the clinician and clinical physicist for the automatic versus the manual plan. Future work will include expanding our automated treatment planning method to other tumor sites and developing other automated radiotherapy workflows.

  3. Integrating machine learning techniques into robust data enrichment approach and its application to gene expression data.

    PubMed

    Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas

    2013-01-01

    The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from the lack of enough data to train and test the constructed models. We argue that the process of sample generation could be successfully automated by employing some sophisticated machine learning techniques. An automated sample generation framework could successfully complement the actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related. The independence eliminates the bias of having the produced approach covering only certain characteristics of the domain and leading to samples skewed in one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates a Hierarchical Markov Model (HIMM), and the third model employs a genetic algorithm. Each model learns as much as possible of the characteristics of the domain being analysed and tries to incorporate the learned characteristics in generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness, and applicability of the proposed multi-model framework.
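
    A minimal sketch of the first perspective, generating new samples by running a Probabilistic Boolean Network forward; the predictor functions and their selection probabilities here are toy stand-ins for ones that, in the paper's framework, would be learned from the real expression data.

```python
import random

random.seed(0)

# Toy PBN: each gene has candidate Boolean predictor functions, each with a
# selection probability (hypothetical; learned from data in the real framework)
predictors = {
    "g0": [(lambda s: s["g1"] and not s["g2"], 0.7),
           (lambda s: s["g1"], 0.3)],
    "g1": [(lambda s: not s["g0"], 1.0)],
    "g2": [(lambda s: s["g0"] or s["g1"], 1.0)],
}

def generate_sample(state, steps=20):
    """Run the PBN forward to synthesize one new (Boolean) expression profile;
    a predictor is drawn for each gene at every update step."""
    for _ in range(steps):
        nxt = {}
        for gene, funcs in predictors.items():
            r, acc = random.random(), 0.0
            for f, p in funcs:
                acc += p
                if r <= acc:
                    nxt[gene] = f(state)
                    break
        state = nxt
    return state

print(generate_sample({"g0": True, "g1": False, "g2": True}))
```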

  4. SU-F-T-423: Automating Treatment Planning for Cervical Cancer in Low- and Middle- Income Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisling, K; Zhang, L; Yang, J

    Purpose: To develop and test two independent algorithms that automatically create the photon treatment fields for a four-field box beam arrangement, a common treatment technique for cervical cancer in low- and middle-income countries. Methods: Two algorithms were developed and integrated into Eclipse using its Advanced Programming Interface. 3D Method: We automatically segment bony anatomy on CT using an in-house multi-atlas contouring tool and project the structures into the beam’s-eye-view. We identify anatomical landmarks on the projections to define the field apertures. 2D Method: We generate DRRs for all four beams. An atlas of DRRs for six standard patients with corresponding field apertures is deformably registered to the test patient DRRs. The set of deformed atlas apertures is fitted to an expected shape to define the final apertures. Both algorithms were tested on 39 patient CTs, and the resulting treatment fields were scored by a radiation oncologist. We also investigated the feasibility of using one algorithm as an independent check of the other algorithm. Results: 96% of the 3D-Method-generated fields and 79% of the 2D-Method-generated fields were scored acceptable for treatment (“Per Protocol” or “Acceptable Variation”). The 3D Method generated more fields scored “Per Protocol” than the 2D Method (62% versus 17%). The 4% of the 3D-Method-generated fields that were scored “Unacceptable Deviation” were all due to an improper L5 vertebra contour resulting in an unacceptable superior jaw position. When these same patients were planned with the 2D Method, the superior jaw was acceptable, suggesting that the 2D Method can be used to independently check the 3D Method. Conclusion: Our results show that our 3D Method is feasible for automatically generating cervical treatment fields. Furthermore, the 2D Method can serve as an automatic, independent check of the automatically generated treatment fields. These algorithms will be implemented for fully automated cervical treatment planning.

  5. Automated Tutoring in Interactive Environments: A Task-Centered Approach.

    ERIC Educational Resources Information Center

    Wolz, Ursula; And Others

    1989-01-01

    Discusses tutoring and consulting functions in interactive computer environments. Tutoring strategies are considered, the expert model and the user model are described, and GENIE (Generated Informative Explanations)--an answer generating system for the Berkeley Unix Mail system--is explained as an example of an automated consulting system. (33…

  6. Final report and recommendations for research on human-automation interaction in the Next Generation Air Transportation System

    DOT National Transportation Integrated Search

    2006-11-01

    This is the final report of an 18-month project to: (1) review Next Generation Air Transportation System (NGATS) Joint Planning and Development Office (JPDO) documents as they pertain to human-automation interaction; (2) review past system failures i...

  7. Speciation analysis of arsenic in biological matrices by automated hydride generation-cryotrapping-atomic absorption spectrometry with multiple microflame quartz tube atomizer (multiatomizer).

    EPA Science Inventory

    This paper describes an automated system for the oxidation state specific speciation of inorganic and methylated arsenicals by selective hydride generation - cryotrapping- gas chromatography - atomic absorption spectrometry with the multiatomizer. The corresponding arsines are ge...

  8. Automated Planning and Scheduling for Planetary Rover Distributed Operations

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Rabideau, Gregg; Tso, Kam S.; Chien, Steve

    1999-01-01

    Automated planning and scheduling, including automated path planning, have been integrated with an Internet-based distributed operations system for planetary rover operations. The resulting prototype system enables faster generation of valid rover command sequences by a distributed planetary rover operations team. The Web Interface for Telescience (WITS) provides Internet-based distributed collaboration, the Automated Scheduling and Planning Environment (ASPEN) provides automated planning and scheduling, and an automated path planner provides path planning. The system was demonstrated on the Rocky 7 research rover at JPL.

  9. MATTS- A Step Towards Model Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

    In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g., state machines), generate abstract test cases, and then convert them into concrete executable test cases (pairs of inputs and expected outputs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
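
    A minimal sketch of that pipeline under stated assumptions: a toy on-board mode manager as the formal state-machine model, breadth-first generation of abstract test cases (one per transition), and their conversion to concrete pairs of input events and expected outputs. The mode manager and its transitions are hypothetical, not taken from the MATTS study.

```python
from collections import deque

# Toy on-board mode manager: (state, event) -> (next_state, expected_output)
TRANSITIONS = {
    ("STANDBY", "arm"):      ("ARMED",   "ack_arm"),
    ("ARMED",   "fire"):     ("ACTIVE",  "thruster_on"),
    ("ARMED",   "disarm"):   ("STANDBY", "ack_disarm"),
    ("ACTIVE",  "shutdown"): ("STANDBY", "thruster_off"),
}

def shortest_path(start, goal):
    """BFS over the state graph; returns a list of (event, expected_output)."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for (src, event), (dst, output) in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [(event, output)]))
    return None

def abstract_test_cases(start="STANDBY"):
    """One test case per transition: the shortest event path reaching its
    source state, followed by the transition itself. Each resulting list of
    (input event, expected output) pairs is directly executable against the
    software under test."""
    cases = []
    for (src, event), (dst, output) in TRANSITIONS.items():
        prefix = shortest_path(start, src)
        if prefix is not None:
            cases.append(prefix + [(event, output)])
    return cases

for case in abstract_test_cases():
    print(case)
```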

  10. An automated testing tool for traffic signal controller functionalities.

    DOT National Transportation Integrated Search

    2010-03-01

    The purpose of this project was to develop an automated tool that facilitates testing of traffic controller functionality using controller interface device (CID) technology. Benefits of such automated testers to traffic engineers include reduced test...

  11. From an automated flight-test management system to a flight-test engineer's workstation

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Brumbaugh, Randal W.; Hewett, M. D.; Tartt, D. M.

    1991-01-01

    The capabilities and evolution of a flight-test engineer's workstation (called TEST-PLAN), derived from an automated flight-test management system, are described. The concept and capabilities of the automated flight-test management system are explored and discussed to illustrate the value of advanced system prototyping and evolutionary software development.

  12. Data Validation in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Wu, Hayley; Twicken, Joseph D.; Tenenbaum, Peter; Clarke, Bruce D.; Li, Jie; Quintana, Elisa V.; Allen, Christopher; Chandrasekaran, Hema; Jenkins, Jon M.; Caldwell, Douglas A.

    2010-01-01

    We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests is performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planets
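
    The multiple-planet search loop can be sketched as follows; the box-shaped "detector" and simple subtraction below are toy stand-ins for the pipeline's wavelet-based matched filter and transiting planet model fit, which are far more elaborate.

```python
import numpy as np

def detect_tce(flux, width=5, threshold=7.0):
    """Toy detector: slide a box of `width` samples over the relative flux
    and flag the deepest mean dip if it exceeds `threshold` sigma."""
    sigma = np.std(flux) / np.sqrt(width)
    depths = np.array([-flux[i:i + width].mean()
                       for i in range(len(flux) - width)])
    i_best = int(np.argmax(depths))
    if depths[i_best] / sigma < threshold:
        return None
    return i_best, width, depths[i_best]

def multiple_planet_search(flux, max_candidates=5):
    """DV-style iterative search: detect a TCE, remove the fitted box model,
    and repeat the search on the residual light curve."""
    residual = np.asarray(flux, float).copy()
    candidates = []
    while len(candidates) < max_candidates:
        tce = detect_tce(residual)
        if tce is None:
            break
        start, width, depth = tce
        candidates.append(tce)
        residual[start:start + width] += depth  # undo the dip in the residual
    return candidates
```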

  13. A Recommendation Algorithm for Automating Corollary Order Generation

    PubMed Central

    Klann, Jeffrey; Schadow, Gunther; McCoy, JM

    2009-01-01

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875
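
    A minimal sketch of item-based suggestion mining in this spirit: co-occurrence counts over order baskets, scored by confidence and lift (one common association-rule interestingness measure); the order codes, data, and thresholds below are hypothetical, not the study's.

```python
from collections import Counter
from itertools import permutations

# Toy order history: each encounter is a set of order codes (hypothetical)
orders = [
    {"gentamicin", "gentamicin_trough"},
    {"gentamicin", "gentamicin_trough", "cbc"},
    {"warfarin", "inr"},
    {"warfarin", "inr", "cbc"},
    {"gentamicin", "cbc"},
]

n = len(orders)
item_count = Counter(i for basket in orders for i in basket)
pair_count = Counter(p for basket in orders for p in permutations(basket, 2))

def corollary_suggestions(trigger, min_conf=0.5):
    """Rank items co-ordered with `trigger` by confidence, then lift."""
    out = []
    for (a, b), c in pair_count.items():
        if a != trigger:
            continue
        conf = c / item_count[a]           # P(b | a)
        lift = conf / (item_count[b] / n)  # interestingness vs. base rate
        if conf >= min_conf:
            out.append((b, conf, lift))
    return sorted(out, key=lambda t: (-t[1], -t[2]))

print(corollary_suggestions("gentamicin"))
```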

  14. A recommendation algorithm for automating corollary order generation.

    PubMed

    Klann, Jeffrey; Schadow, Gunther; McCoy, J M

    2009-11-14

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards.

  15. Rapid, scalable and highly automated HLA genotyping using next-generation sequencing: a transition from research to diagnostics

    PubMed Central

    2013-01-01

    Background Human leukocyte antigen matching at allelic resolution has been proven clinically significant in hematopoietic stem cell transplantation, lowering the risk of graft-versus-host disease and mortality. However, due to the ever-growing HLA allele database, tissue typing laboratories face substantial challenges. In light of the complexity and the high degree of allelic diversity, it has become increasingly difficult to define the classical transplantation antigens at high resolution by using well-tried methods. Thus, next-generation sequencing is entering diagnostic laboratories at the perfect time, serving as a promising tool to overcome intrinsic HLA typing problems. Therefore, we have developed and validated a scalable automated HLA class I and class II typing approach suitable for diagnostic use. Results A validation panel of 173 clinical and proficiency testing samples was analysed, demonstrating 100% concordance to the reference method. From a total of 1,273 loci we were able to generate 1,241 (97.3%) initial successful typings. The mean ambiguity reduction for the analysed loci was 93.5%. Allele assignment including intronic sequences showed an improved resolution (99.2%) of non-expressed HLA alleles. Conclusion We provide a powerful HLA typing protocol offering a short turnaround time of only two days, a fully integrated workflow and, most importantly, a high degree of typing reliability. The presented automated assay is flexible and can be scaled by specific primer compilations and the use of different 454 sequencing systems. The workflow was successfully validated according to the policies of the European Federation for Immunogenetics. Next-generation sequencing seems set to become one of the new methods in the field of histocompatibility. PMID:23557197

  16. SlugIn 1.0: A Free Tool for Automated Slug Test Analysis.

    PubMed

    Martos-Rosillo, Sergio; Guardiola-Albert, Carolina; Padilla Benítez, Alberto; Delgado Pastor, Joaquín; Azcón González, Antonio; Durán Valsero, Juan José

    2018-05-01

    The correct characterization of aquifer parameters is essential for water-supply and water-quality investigations. Slug tests are widely used for these purposes. While free software is available to interpret slug tests, some codes are not user-friendly, or do not include a wide range of methods to interpret the results, or do not include automatic, inverse solutions to the test data. The private sector has also generated several good programs to interpret slug test data, but they are not free of charge. The computer program SlugIn 1.0 is available online for free download, and is demonstrated to aid in the analysis of slug tests to estimate hydraulic parameters. The program provides an easy-to-use Graphical User Interface. SlugIn 1.0 incorporates automated parameter estimation and facilitates the visualization of several interpretations of the same test. It incorporates solutions for confined and unconfined aquifers, partially penetrating wells, skin effects, shape factor, anisotropy, high hydraulic conductivity formations and the Mace test for large-diameter wells. It is available in English and Spanish and can be downloaded from the web site of the Geological Survey of Spain. Two field examples are presented to illustrate how the software operates. © 2018, National Ground Water Association.
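
    As an example of the kind of solution such a tool automates, here is a sketch of the classical Hvorslev (1951) slug-test analysis: fit the exponential head recovery, then convert the basic time lag to hydraulic conductivity. This is a textbook method offered for illustration, not SlugIn 1.0's internal code.

```python
import numpy as np

def hvorslev_k(t, h, h0, r_c, L_e, R):
    """Hvorslev analysis for a partially penetrating well (L_e/R > 8):
    h/h0 = exp(-t/T0),  K = r_c^2 * ln(L_e/R) / (2 * L_e * T0).
    t: times [s]; h: head displacements [m]; h0: initial displacement [m];
    r_c: casing radius; L_e: screen length; R: well radius (all in m)."""
    y = np.log(np.asarray(h) / h0)
    slope = np.polyfit(t, y, 1)[0]   # linear fit: slope = -1/T0
    T0 = -1.0 / slope                # basic time lag (h/h0 = 0.37)
    return r_c**2 * np.log(L_e / R) / (2.0 * L_e * T0)

# synthetic check: data generated with T0 = 50 s should be recovered exactly
t = np.linspace(0, 200, 40)
h = 0.5 * np.exp(-t / 50.0)
print(hvorslev_k(t, h, h0=0.5, r_c=0.05, L_e=2.0, R=0.075))
```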

  17. Moving beyond the pros and cons of automating cognitive testing in pathological aging and dementia: the case for equal opportunity.

    PubMed

    Wesnes, Keith A

    2014-01-01

    The lack of progress over the last decade in developing treatments for Alzheimer's disease has called into question the quality of the cognitive assessments used while also shifting the emphasis from treatment to prophylaxis by studying the disorder at earlier stages, even prior to the development of cognitive symptoms. This has led various groups to seek cognitive tests which are more sensitive than those currently used and which can be meaningfully administered to individuals with mild or even no cognitive impairment. Although computerized tests have long been used in this field, they have made few inroads compared with non-automated tests. This review attempts to put in perspective the relative utilities of automated and non-automated tests of cognitive function in therapeutic trials of pathological aging and the dementias. By reviewing the automation of cognitive tests over the last 150 years, it is also hoped to dispel the notion that such procedures are novel compared with pencil-and-paper testing. Furthermore, data will be presented to illustrate that older individuals and patients with dementia are neither stressed nor disadvantaged when tested with appropriately developed computerized methods. An important aspect of automated testing is that it can assess all aspects of task performance, including the speed of cognitive processes, and data are presented on the advantages this can confer in clinical trials. The ultimate objectives of the review are to encourage decision making in the field to move away from the automated/non-automated dichotomy and to develop criteria pertinent to each trial against which all available procedures are evaluated. If we are to make serious progress in this area, we must use the best tools available, and the evidence suggests that automated testing has earned the right to be judged against the same criteria as non-automated tests.

  18. Automated generation of image products for Mars Exploration Rover Mission tactical operations

    NASA Technical Reports Server (NTRS)

    Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen

    2005-01-01

    This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product generating systems will also be provided with emphasis on interdependencies, including those for a) geometric rectification of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.

  19. Automated real-time data acquisition and analysis of cardiorespiratory function.

    PubMed

    Moorman, R C; Mackenzie, C F; Ho, G H; Barnas, G M; Wilson, P D; Matjasko, M J

    1991-01-01

    Microcomputer generation of an automated record without complexity or operator intervention is desirable in many circumstances. We developed a microcomputer system specifically designed for simplified automated collection of cardiorespiratory data in research and clinical environments. We tested the system under extreme but clinically possible conditions by comparison with a patient simulator. Ranges used were heart rate of 35-182 beats per minute, systemic blood pressures of 65-147 mmHg and venous blood pressures of 14-37 mmHg, all with superimposed respiratory variation of 0-24 mmHg. We also tested multiple electrocardiographic dysrhythmias. The results showed that there were no clinically relevant differences in vascular pressures, heart rate, and other variables between computer-processed and simulator values. Manually and computer-recorded physiological variables were compared to simulator values, and the results show that the computer values were more accurate. The system was used routinely in 21 animal research experiments over a 4 month period employing a total of 270 collection periods. The file system integrity was tested and found to be satisfactory, even during power failures. Unlike other data collection systems this one (1) requires little or no operator intervention and training, (2) has been rigorously tested for accuracy using a wide variety of extreme patient conditions, (3) has had computer-derived values measured against a standardized reference, (4) is reliable against external sources of computer failure, and (5) has screen and printout presentations with quick and easily understandable formats.

  20. 78 FR 27984 - Modification of the National Customs Automation Program Test (NCAP) Regarding Reconciliation for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    ... Customs Automation Program Test (NCAP) Regarding Reconciliation for Filing Certain Post-Importation Claims... Automation Program (NCAP) Reconciliation prototype test to include the filing of post-importation ... notices. DATES: The test is modified to allow Reconciliation of post-importation preferential tariff...

  1. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  2. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  3. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  4. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  5. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  6. Automated Conflict Resolution For Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    2005-01-01

    The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be tested under realistic traffic conditions to demonstrate that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and it must be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors, and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.
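
    For the geometric backdrop of such systems, here is a minimal closest-point-of-approach check for two aircraft on straight-line trajectories. This illustrates conflict detection only, not the AAC resolution algorithm itself, and the separation minimum and look-ahead horizon are illustrative values.

```python
import numpy as np

def predict_conflict(p1, v1, p2, v2, sep_nmi=5.0, horizon_s=1200.0):
    """Closest-point-of-approach test; positions in nmi, velocities in
    nmi/s (illustrative units). Returns (conflict?, t_cpa, miss distance)."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    denom = dv @ dv
    t_cpa = 0.0 if denom == 0.0 else -(dp @ dv) / denom
    t_cpa = float(np.clip(t_cpa, 0.0, horizon_s))   # only look ahead
    d_cpa = float(np.linalg.norm(dp + dv * t_cpa))  # miss distance at CPA
    return d_cpa < sep_nmi, t_cpa, d_cpa

# head-on encounter 40 nmi apart, closing at ~0.24 nmi/s combined
print(predict_conflict([0, 0], [0.12, 0], [40, 1], [-0.12, 0]))
```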

  7. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters: alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, with the true-positive rate increasing for the same average number of false positives per image.
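
    A sketch of the adaptive-step search described above: finite-difference gradient ascent over (alpha, beta, gamma), growing the step on improvement and shrinking it otherwise. The smooth surrogate score stands in for the actual correlation-peak metrics, and the [0, 1] parameter range is an assumption, not taken from the paper.

```python
import numpy as np

def composite_score(params):
    """Stand-in for the composite of correlation peak height and
    peak-to-sidelobe ratio; the real system would build an OT-MACH filter
    from (alpha, beta, gamma) and correlate it against training imagery."""
    alpha, beta, gamma = params
    return -(alpha - 0.4)**2 - (beta - 0.2)**2 - (gamma - 0.7)**2

def adaptive_gradient_ascent(p0, step=0.1, eps=1e-3, iters=200):
    p = np.asarray(p0, dtype=float)
    best = composite_score(p)
    for _ in range(iters):
        # central finite-difference gradient, one parameter at a time
        g = np.zeros_like(p)
        for i in range(len(p)):
            d = np.zeros_like(p)
            d[i] = eps
            g[i] = (composite_score(p + d) - composite_score(p - d)) / (2 * eps)
        trial = np.clip(p + step * g, 0.0, 1.0)
        score = composite_score(trial)
        if score > best:            # accept the move and grow the step
            p, best, step = trial, score, step * 1.2
        else:                       # reject the move and shrink the step
            step *= 0.5
        if step < 1e-6:
            break
    return p, best

print(adaptive_gradient_ascent([0.9, 0.9, 0.1]))
```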

  8. An Intelligent Automation Platform for Rapid Bioprocess Design

    PubMed Central

    Wu, Tianyi

    2014-01-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user’s inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. PMID:24088579

  9. Nemesis Autonomous Test System

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A,; Clement, Bradley J.

    2012-01-01

    A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios with genetic algorithms, using an operational model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. It leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems. It also uses goal networks to describe test scenarios.
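
    The war-game loop can be sketched as a small genetic algorithm that evolves test scenarios toward those that most excite a failure score from an operational model of the system under test; the two-parameter scenario space and the hidden flaw below are toy stand-ins, not Nemesis internals.

```python
import random

random.seed(1)

def run_system(scenario):
    """Stand-in for the system-under-test model: returns a failure score.
    Here a hidden flaw is triggered near one region of the input space."""
    x, y = scenario
    return max(0.0, 1.0 - ((x - 0.8)**2 + (y - 0.3)**2) ** 0.5)

def evolve_tests(pop_size=40, generations=30, mut=0.1):
    """Evolve scenarios toward flaw-revealing inputs (elitism + crossover)."""
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=run_system, reverse=True)
        elite = scored[: pop_size // 4]            # keep the best test cases
        pop = list(elite)
        while len(pop) < pop_size:
            a, b = random.sample(elite, 2)
            child = tuple((pa + pb) / 2 + random.gauss(0, mut)
                          for pa, pb in zip(a, b))  # crossover + mutation
            pop.append(child)
    return max(pop, key=run_system)

print(evolve_tests())  # converges near the flaw-revealing region (0.8, 0.3)
```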

  10. Automated Tetrahedral Mesh Generation for CFD Analysis of Aircraft in Conceptual Design

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Li, Wu; Campbell, Richard L.

    2014-01-01

    The paper introduces an automation process of generating a tetrahedral mesh for computational fluid dynamics (CFD) analysis of aircraft configurations in early conceptual design. The method was developed for CFD-based sonic boom analysis of supersonic configurations, but can be applied to aerodynamic analysis of aircraft configurations in any flight regime.

  11. Determining RNA quality for NextGen sequencing: some exceptions to the gold standard rule of 23S to 16S rRNA ratio

    USDA-ARS?s Scientific Manuscript database

    Using next-generation-sequencing technology to assess entire transcriptomes requires high quality starting RNA. Currently, RNA quality is routinely judged using automated microfluidic gel electrophoresis platforms and associated algorithms. Here we report that such automated methods generate false-n...

  12. Automation in the Teaching of Descriptive Geometry and CAD. High-Level CAD Templates Using Script Languages

    NASA Astrophysics Data System (ADS)

    Moreno, R.; Bazán, A. M.

    2017-10-01

    The main purpose of this work is to study how the learning of technical drawing and descriptive geometry can be improved by applying automated processes, assisted by high-level CAD templates (HLCts), to exercises that are traditionally solved by hand. Just as an exercise can be solved step by step with the traditional procedures detailed in technical drawing and descriptive geometry manuals, CAD applications allow us to do the same and later generalize the solution by incorporating references. Traditional teaching content has been relegated in current curricula, yet it can still be applied in certain automation processes. The use of geometric references (as variables in script languages) and their incorporation into HLCts allow the automation of drawing processes. Instead of repeatedly creating similar exercises or modifying data in the same exercises, users should be able to use HLCts to generate future modifications of these exercises. This paper introduces the automation process for generating exercises based on CAD script files, aided by parametric geometry calculation tools. The proposed method allows us to design new exercises without user intervention. The integration of CAD, mathematics, and descriptive geometry facilitates their joint learning. Automation in the generation of exercises not only saves time but also increases the quality of the statements and reduces the possibility of human error.
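
    A minimal sketch of the parametric-exercise idea: a seeded generator produces new variants of the same exercise together with the script-language commands that draw it. AutoCAD-style .scr lines are used here as one plausible target format; the exercise itself (a random triangle) is a hypothetical example, not one of the paper's templates.

```python
import random

def make_exercise(seed):
    """Generate one exercise variant: three random vertices plus the
    AutoCAD-style script (.scr) commands that draw the triangle.
    In a .scr file, a blank line acts like pressing Enter."""
    rng = random.Random(seed)
    pts = [(rng.randint(0, 100), rng.randint(0, 100)) for _ in range(3)]
    lines = ["LINE"]
    for x, y in pts + [pts[0]]:       # close the triangle
        lines.append(f"{x},{y}")
    lines.append("")                  # end the LINE command
    return pts, "\n".join(lines)

# each seed yields a new statement of the same underlying exercise
vertices, script = make_exercise(42)
print(vertices)
print(script)
```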

  13. JWST Associations overview: automated generation of combined products

    NASA Astrophysics Data System (ADS)

    Alexov, Anastasia; Swade, Daryl; Bushouse, Howard; Diaz, Rosa; Eisenhamer, Jonathan; Hack, Warren; Kyprianou, Mark; Levay, Karen; Rahmani, Christopher; Swam, Mike; Valenti, Jeff

    2018-01-01

    We present the design of the James Webb Space Telescope (JWST) Data Management System (DMS) automated processing of Associations. An Association captures the relationship between exposures and higher-level data products, such as combined mosaics created from dithered and tiled observations. The astronomer's intent is captured within the Proposal Planning System (PPS) and provided to DMS as candidate associations. These candidates are converted into Association Pools and Association Generator Tables that serve as input to the automated processing that creates the combined data products. Association Pools are generated to capture a list of exposures that could potentially form associations and to provide relevant information about those exposures. Using grouping definitions, the Association Generator creates one or more Association Tables from a single input Association Pool. Each Association Table defines a set of exposures to be combined and the ruleset for the combination to be performed; the calibration software creates Associated data products based on these input tables. The initial design produces automated Associations within a proposal. This overall design is also conducive to eventually producing Associations for observations from multiple proposals, similar to the Hubble Legacy Archive (HLA).
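
    The pool-to-association step can be sketched as rule-based grouping: exposures sharing the keys named by a rule are combined into one candidate product. The pool columns and rule below are hypothetical simplifications, not the actual Association Pool schema.

```python
from collections import defaultdict

# Toy association pool: one row per exposure (hypothetical columns)
pool = [
    {"exposure": "e1", "program": 101, "target": "NGC-1", "filter": "F200W"},
    {"exposure": "e2", "program": 101, "target": "NGC-1", "filter": "F200W"},
    {"exposure": "e3", "program": 101, "target": "NGC-1", "filter": "F444W"},
]

def generate_associations(pool, rule=("program", "target", "filter")):
    """Group pool rows into candidate associations: exposures sharing the
    rule's keys would be combined into one product (e.g., a mosaic)."""
    groups = defaultdict(list)
    for row in pool:
        groups[tuple(row[k] for k in rule)].append(row["exposure"])
    return [{"members": m, "rule": rule}
            for m in groups.values() if len(m) > 1]

print(generate_associations(pool))  # e1 and e2 form one association
```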

  14. Sequence-of-events-driven automation of the deep space network

    NASA Technical Reports Server (NTRS)

    Hill, R., Jr.; Fayyad, K.; Smyth, C.; Santos, T.; Chen, R.; Chien, S.; Bevan, R.

    1996-01-01

    In February 1995, sequence-of-events (SOE)-driven automation technology was demonstrated for a Voyager telemetry downlink track at DSS 13. This demonstration entailed automated generation of an operations procedure (in the form of a temporal dependency network) from project SOE information using artificial intelligence planning technology and automated execution of the temporal dependency network using the link monitor and control operator assistant system. This article describes the overall approach to SOE-driven automation that was demonstrated, identifies gaps in SOE definitions and project profiles that hamper automation, and provides detailed measurements of the knowledge engineering effort required for automation.

  15. Sequence-of-Events-Driven Automation of the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Hill, R., Jr.; Fayyad, K.; Smyth, C.; Santos, T.; Chen, R.; Chien, S.; Bevan, R.

    1996-01-01

    In February 1995, sequence-of-events (SOE)-driven automation technology was demonstrated for a Voyager telemetry downlink track at DSS 13. This demonstration entailed automated generation of an operations procedure (in the form of a temporal dependency network) from project SOE information using artificial intelligence planning technology and automated execution of the temporal dependency network using the link monitor and control operator assistant system. This article describes the overall approach to SOE-driven automation that was demonstrated, identifies gaps in SOE definitions and project profiles that hamper automation, and provides detailed measurements of the knowledge engineering effort required for automation.

  16. Creating and virtually screening databases of fluorescently-labelled compounds for the discovery of target-specific molecular probes

    NASA Astrophysics Data System (ADS)

    Kamstra, Rhiannon L.; Dadgar, Saedeh; Wigg, John; Chowdhury, Morshed A.; Phenix, Christopher P.; Floriano, Wely B.

    2014-11-01

    Our group has recently demonstrated that virtual screening is a useful technique for the identification of target-specific molecular probes. In this paper, we discuss some of our proof-of-concept results involving two biologically relevant target proteins, and report the development of a computational script to generate large databases of fluorescently-labelled compounds for computer-assisted molecular design. The virtual screening of a small library of 1,153 fluorescently-labelled compounds against two targets and the experimental testing of selected hits reveal that this approach is efficient at identifying molecular probes, and that the screening of a labelled library is preferred over the screening of base compounds followed by conjugation of confirmed hits. The automated script for library generation exploits the known reactivity of commercially available dyes, such as NHS-esters, to create large virtual databases of fluorescence-tagged small molecules that can be easily synthesized in a laboratory. A database of 14,862 compounds, each tagged with the ATTO680 fluorophore, was generated with the automated script reported here. This library is available for download and is suitable for virtual ligand screening aimed at the identification of target-specific fluorescent molecular probes.

  16. Biofabricated constructs as tissue models: a short review.

    PubMed

    Costa, Pedro F

    2015-04-01

    Biofabrication currently provides reliable models for studying the development of cells and tissues in multiple environments. As the complexity of biofabricated constructs increases, so does their ability to closely mimic native tissues and organs. Various biofabrication technologies now make it possible to build cell/tissue constructs precisely and accurately across multiple dimension ranges. These technologies can also assemble multiple types of cells and/or materials into constructs that closely mimic various types of tissues. Furthermore, the high degree of automation involved in these technologies enables the study of large arrays of testing conditions within increasingly smaller and more automated devices, both in vitro and in vivo. Although not yet able to generate constructs as complex as whole tissues and organs, biofabrication is rapidly evolving in that direction. One major hurdle to overcome before such complexity can be achieved is generating complex vascular structures within biofabricated constructs. This review describes several of the most relevant technologies and methodologies currently used in biofabrication, and provides a brief overview of their current and future potential applications.

  17. 78 FR 53466 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image System (DIS) and Simplified Entry (SE); Correction AGENCY: U.S. Customs and Border Protection, Department...

  18. Automated protein NMR structure determination using wavelet de-noised NOESY spectra.

    PubMed

    Dancea, Felician; Günther, Ulrich

    2005-11-01

    A major time-consuming step of protein NMR structure determination is the generation of reliable NOESY cross-peak lists, which usually requires a significant amount of manual interaction. Here we present a new algorithm for automated peak picking involving wavelet de-noised NOESY spectra in a process where the identification of peaks is coupled to automated structure determination. The core of this method is the generation of incremental peak lists by applying different wavelet de-noising procedures which yield peak lists of differing noise content. In combination with additional filters which probe the consistency of the peak lists, good convergence of the NOESY-based automated structure determination could be achieved. These algorithms were implemented in the context of the ARIA software for automated NOE assignment and structure determination and were validated for a polysulfide-sulfur transferase protein of known structure. The procedures presented here should be broadly applicable for efficient protein NMR structure determination and automated NMR peak picking.
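
    As an illustration of the incremental-list idea, the sketch below de-noises a 1D trace at one threshold scale and picks peaks; sweeping the scale factor k yields peak lists of differing noise content. It assumes PyWavelets and SciPy; the wavelet, level, and thresholds are assumptions, not the ARIA implementation.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def denoise_and_pick(trace, wavelet="db4", level=4, k=3.0):
        """Soft-threshold wavelet de-noising, then peak picking."""
        coeffs = pywt.wavedec(trace, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
        thr = k * sigma
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        clean = pywt.waverec(coeffs, wavelet)[: len(trace)]
        peaks, _ = find_peaks(clean, height=thr)         # candidate peak indices
        return clean, peaks
    ```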

  19. Automated Source-Code-Based Testing of Object-Oriented Software

    NASA Astrophysics Data System (ADS)

    Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten

    2014-08-01

    With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source-code-based testing for C++.

  1. A Case Study of Reverse Engineering Integrated in an Automated Design Process

    NASA Astrophysics Data System (ADS)

    Pescaru, R.; Kyratsis, P.; Oancea, G.

    2016-11-01

    This paper presents a design methodology which automates the generation of curves extracted from point clouds obtained by digitizing physical objects. The methodology is described for a product belonging to the consumables industry, namely a footwear-type product with a complex shape containing many curves. The final result is the automated generation of wrapping curves, surfaces and solids according to the characteristics of the customer's foot and the preferences for the chosen model, which leads to the development of customized products.
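
    A minimal sketch of one such automated step, assuming SciPy: fitting a smooth parametric B-spline through an ordered slice of digitized points. The smoothing factor and sample count are placeholder values, not the paper's settings.

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    def fit_curve(points, smooth=0.5, n=200):
        """Fit a parametric B-spline to ordered 3D points and resample it.
        points: array of shape (3, m) holding x, y, z from the point cloud."""
        tck, _ = splprep(points, s=smooth)
        u = np.linspace(0.0, 1.0, n)
        return np.array(splev(u, tck))   # (3, n) points on the fitted curve
    ```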

  2. THE DECADE OF THE RABiT (2005–15)

    PubMed Central

    Garty, G.; Turner, H. C.; Salerno, A.; Bertucci, A.; Zhang, J.; Chen, Y.; Dutta, A.; Sharma, P.; Bian, D.; Taveras, M.; Wang, H.; Bhatla, A.; Balajee, A.; Bigelow, A. W.; Repin, M.; Lyulko, O. V.; Simaan, N.; Yao, Y. L.; Brenner, D. J.

    2016-01-01

    The RABiT (Rapid Automated Biodosimetry Tool) is a dedicated robotic platform for the automation of cytogenetics-based biodosimetry assays. The RABiT was developed to fulfill the critical requirement for triage following a mass radiological or nuclear event. Starting from well-characterized and accepted assays, we developed a custom robotic platform to automate them. We present here a brief historical overview of the RABiT program at Columbia University from its inception in 2005 until the RABiT was dismantled at the end of 2015. The main focus of this paper is to demonstrate how the biological assays drove development of the custom robotic systems and how, in turn, new advances in commercial robotic platforms inspired small modifications in the assays to allow replacing customized robotics with ‘off the shelf’ systems. Currently, a second-generation RABiT II system at Columbia University, consisting of a PerkinElmer cell::explorer, has been programmed to perform the RABiT assays and is undergoing testing and optimization studies. PMID:27412510

  3. Visualization for Hyper-Heuristics. Front-End Graphical User Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroenung, Lauren

    Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved, because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. While such automated design has great advantages, it can often be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues of usability by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics to support practitioners, as well as scientific visualization of the produced automated designs. My contributions to this project are exhibited in the user-facing portion of the developed system and the detailed scientific visualizations created from back-end data.

  4. An automated high throughput tribometer for adhesion, wear, and friction measurements

    NASA Astrophysics Data System (ADS)

    Kalihari, Vivek; Timpe, Shannon J.; McCarty, Lyle; Ninke, Matthew; Whitehead, Jim

    2013-03-01

    Understanding the origin and correlation of different surface properties under a multitude of operating conditions is critical in tribology. The diversity of tribological properties, and the lack of a single instrument to measure them all, makes it difficult to compare and correlate properties, particularly in light of the wide range of interfaces commonly investigated. In the current work, a novel automated tribometer has been designed and validated, providing a unique experimental platform capable of high-throughput adhesion, wear, kinetic friction, and static friction measurements. The innovative design aspects that allow for a variety of probes, sample surfaces, and testing conditions are discussed. Critical components of the instrument and their design criteria are described along with examples of data collection schemes. A case study is presented with multiple surface measurements performed on a set of characteristic substrates. Adhesion, wear, kinetic friction, and static friction are analyzed and compared across surfaces, highlighting the comprehensive nature of the surface data that can be generated using the automated high-throughput tribometer.

  5. Automation of learning-set testing - The video-task paradigm

    NASA Technical Reports Server (NTRS)

    Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.

    1989-01-01

    Researchers interested in studying discrimination learning in primates have typically utilized variations in the Wisconsin General Test Apparatus (WGTA). In the present experiment, a new testing apparatus for the study of primate learning is proposed. In the video-task paradigm, rhesus monkeys (Macaca mulatta) respond to computer-generated stimuli by manipulating a joystick. Using this apparatus, discrimination learning-set data for 2 monkeys were obtained. Performance on Trial 2 exceeded 80 percent within 200 discrimination learning problems. These data illustrate the utility of the video-task paradigm in comparative research. Additionally, the efficient learning and rich data that were characteristic of this study suggest several advantages of the present testing paradigm over traditional WGTA testing.

  6. SUN: A fully automated interferometric test bench aimed at measuring photolithographic grade lenses with a sub nanometer accuracy

    NASA Astrophysics Data System (ADS)

    Bourgois, R.; Hamy, A. L.; Pourcelot, P.

    2017-10-01

    SUN is a test bench developed by Safran Reosc to measure spherical or aspherical surface errors of litho-grade lenses with sub-nanometer accuracy. SUN provides full-aperture, high-resolution interferometric measurements. Measurements are performed at the center of curvature using a high-precision transmission sphere (TS) and, for aspheres, computer-generated holograms (CGH), in order to illuminate the surface at normal incidence. SUN can measure lenses with diameters up to 350 mm and radii of curvature varying from 60 to 3000 mm.

  7. Simulation Test Of Descent Advisor

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.

    1991-01-01

    Report describes piloted-simulation test of Descent Advisor (DA), subsystem of larger automation system being developed to assist human air-traffic controllers and pilots. Focuses on results of piloted simulation, in which airline crews executed controller-issued descent advisories along standard curved-path arrival routes. Crews able to achieve arrival-time precision of plus or minus 20 seconds at metering fix. Analysis of errors generated in turns resulted in further enhancements of algorithm to increase accuracies of its predicted trajectories. Evaluations by pilots indicate general support for DA concept and provide specific recommendations for improvement.

  8. Netwar

    NASA Astrophysics Data System (ADS)

    Keen, Arthur A.

    2006-04-01

    This paper describes technology being developed at 21st Century Technologies to automate Computer Network Operations (CNO). CNO refers to DoD activities related to Attacking and Defending Computer Networks (CNA & CND). Next generation cyber threats are emerging in the form of powerful Internet services and tools that automate intelligence gathering, planning, testing, and surveillance. We will focus on "Search-Engine Hacks", queries that can retrieve lists of router/switch/server passwords, control panels, accessible cameras, software keys, VPN connection files, and vulnerable web applications. Examples include "Titan Rain" attacks against DoD facilities and the Santy worm, which identifies vulnerable sites by searching Google for URLs containing application-specific strings. This trend will result in increasingly sophisticated and automated intelligence-driven cyber attacks coordinated across multiple domains that are difficult to defeat or even understand with current technology. One traditional method of CNO relies on surveillance detection as an attack predictor. Unfortunately, surveillance detection is difficult because attackers can perform search engine-driven surveillance such as with Google Hacks, and avoid touching the target site. Therefore, attack observables represent only about 5% of the attacker's total attack time, and are inadequate to provide warning. In order to predict attacks and defend against them, CNO must also employ more sophisticated techniques and work to understand the attacker's Motives, Means and Opportunities (MMO). CNO must use automated reconnaissance tools, such as Google, to identify information vulnerabilities, and then utilize Internet tools to observe the intelligence gathering, planning, testing, and collaboration activities that represent 95% of the attacker's effort.

  9. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1983-01-01

    The current work in progress for the SAGA project is described. The highlights of this research are: a parser-independent SAGA editor; a design for the screen editing facilities of the editor; delivery to NASA of release 1 of Olorin, the SAGA parser generator; personal workstation environment research; release 1 of the SAGA symbol table manager; delta generation in SAGA; requirements for a proof management system; documentation for and testing of the Cyber Pascal make prototype; a prototype Cyber-based slicing facility; a June 1984 demonstration plan; SAGA utility programs; a summary of UNIX software engineering support; and a theorem prover review.

  10. E-novo: an automated workflow for efficient structure-based lead optimization.

    PubMed

    Pearce, Bradley C; Langley, David R; Kang, Jia; Huang, Hongwei; Kulkarni, Amit

    2009-07-01

    An automated E-Novo protocol designed as a structure-based lead optimization tool was prepared through Pipeline Pilot with existing CHARMm components in Discovery Studio. A scaffold core having 3D binding coordinates of interest is generated from a ligand-bound protein structural model. Ligands of interest are generated from the scaffold using an R-group fragmentation/enumeration tool within E-Novo, with their cores aligned. The ligand side chains are conformationally sampled and are subjected to core-constrained protein docking, using a modified CHARMm-based CDOCKER method to generate top poses along with CDOCKER energies. In the final stage of E-Novo, a physics-based binding energy scoring function ranks the top ligand CDOCKER poses using a more accurate Molecular Mechanics-Generalized Born with Surface Area method. Correlation of the calculated ligand binding energies with experimental binding affinities was used to validate protocol performance. Inhibitors of Src tyrosine kinase, CDK2 kinase, beta-secretase, factor Xa, HIV protease, and thrombin were used to test the protocol using published ligand crystal structure data within reasonably defined binding sites. In-house Respiratory Syncytial Virus inhibitor data were used as a more challenging test set using a hand-built binding model. Least-squares fits for all data sets suggested reasonable validation of the protocol within the context of observed ligand binding poses. The E-Novo protocol provides a convenient all-in-one structure-based design process for rapid assessment and scoring of lead optimization libraries.

  11. A Robotic Platform for Quantitative High-Throughput Screening

    PubMed Central

    Michael, Sam; Auld, Douglas; Klumpp, Carleen; Jadhav, Ajit; Zheng, Wei; Thorne, Natasha; Austin, Christopher P.; Inglese, James

    2008-01-01

    High-throughput screening (HTS) is increasingly being adopted in academic institutions, where the decoupling of screening and drug development has led to unique challenges, as well as novel uses of instrumentation, assay formulations, and software tools. Advances in technology have made automated unattended screening in the 1,536-well plate format broadly accessible and have further facilitated the exploration of new technologies and approaches to screening. A case in point is our recently developed quantitative HTS (qHTS) paradigm, which tests each library compound at multiple concentrations to construct concentration-response curves (CRCs), generating a comprehensive data set for each assay. The practical implementation of qHTS for cell-based and biochemical assays across libraries of > 100,000 compounds (e.g., between 700,000 and 2,000,000 sample wells tested) requires maximal efficiency and miniaturization and the ability to easily accommodate many different assay formats and screening protocols. Here, we describe the design and utilization of a fully integrated and automated screening system for qHTS at the National Institutes of Health's Chemical Genomics Center. We report system productivity, reliability, and flexibility, as well as modifications made to increase throughput, add additional capabilities, and address limitations. The combination of this system and qHTS has led to the generation of over 6 million CRCs from > 120 assays in the last 3 years and is a technology that can be widely implemented to increase efficiency of screening and lead generation. PMID:19035846
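
    At the heart of qHTS is fitting a concentration-response curve per compound. Below is a hedged sketch, assuming SciPy, of a four-parameter Hill fit; the concentrations and responses are made-up illustrative values, not NCGC data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, bottom, top, ac50, slope):
        """Four-parameter Hill (logistic) concentration-response model."""
        return bottom + (top - bottom) / (1.0 + (ac50 / c) ** slope)

    # Illustrative titration series (molar) and % activity readings
    conc = np.array([0.5, 2.3, 11.5, 57.5, 287.0]) * 1e-9
    resp = np.array([3.0, 9.0, 32.0, 70.0, 94.0])

    p0 = [resp.min(), resp.max(), np.median(conc), 1.0]   # crude starting guess
    (bottom, top, ac50, slope), _ = curve_fit(hill, conc, resp, p0=p0, maxfev=5000)
    print(f"AC50 ~ {ac50:.2e} M")
    ```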

  12. Random Changes of Accommodation Stimuli: An Automated Extension of the Flippers Accommodative Facility Test.

    PubMed

    Otero, Carles; Aldaba, Mikel; López, Silvia; Díaz-Doutón, Fernando; Vera-Díaz, Fuensanta A; Pujol, Jaume

    2018-06-01

    To study the accommodative dynamics for predictable and unpredictable stimuli using manual and automated accommodative facility tests. Materials and Methods: Seventeen young healthy subjects were tested monocularly in two consecutive sessions, using five different conditions. Two conditions replicated the conventional monocular accommodative facility tests for far and near distances, performed with manually held flippers. The other three conditions were automated and conducted using an electro-optical system and an open-field autorefractor. Two of the three automated conditions replicated the predictable manual accommodative facility tests. The last automated condition was a hybrid approach using a novel method whereby far- and near-accommodative-facility tests were randomly integrated into a single test of four unpredictable accommodative demands. The within-subject standard deviations for far- and near-distance accommodative reversals were (±1, ±1) cycles per minute (cpm) for the manual flipper accommodative facility conditions and (±3, ±4) cpm for the automated conditions. The 95% limits of agreement between the manual and the automated conditions for far and near distances were poor: (-18, 12) and (-15, 3). During the hybrid unpredictable condition, the response time and accommodative response parameters were significantly (p < 0.05) larger for accommodation than for disaccommodation responses at high accommodative demands only. The response times during the transitions 0.17/2.17 D and 0.50/4.50 D appeared to be indistinguishable between the hybrid unpredictable and the conventional predictable automated tests. The automated accommodative facility test does not agree with the manual flipper test results. Operator delays in flipping the lens may account for these differences. This novel test, using unpredictable stimuli, provides a more comprehensive examination of accommodative dynamics than conventional manual accommodative facility tests. Unexpectedly, the unpredictability of the stimulus did not affect accommodation dynamics. Further studies are needed to evaluate the sensitivity of this novel hybrid technique on individuals with accommodative anomalies.

  13. The cobas® 6800/8800 System: a new era of automation in molecular diagnostics.

    PubMed

    Cobb, Bryan; Simon, Christian O; Stramer, Susan L; Body, Barbara; Mitchell, P Shawn; Reisch, Natasa; Stevens, Wendy; Carmona, Sergio; Katz, Louis; Will, Stephen; Liesenfeld, Oliver

    2017-02-01

    Molecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.

  14. [Health technology assessment report: Computer-assisted Pap test for cervical cancer screening].

    PubMed

    Della Palma, Paolo; Moresco, Luca; Giorgi Rossi, Paolo

    2012-01-01

    HEALTH PROBLEM: Cervical cancer is a disease which is highly preventable by means of Pap test screening for precancerous lesions, which can be easily treated. Furthermore, in the near future, control of the disease will be enhanced by vaccination, which prevents infection by the human papillomavirus types that cause the vast majority of cervical cancers. The effectiveness of screening in drastically reducing cervical cancer incidence has been clearly demonstrated. The epidemiology of cervical cancer in industrialised countries is now determined mostly by the Pap test coverage of the female population and by the ability of health systems to assure appropriate follow-up after an abnormal Pap test. Today there are two fully automated systems for the computer-assisted Pap test: the BD FocalPoint and the Hologic Imager. Recently, the Hologic Integrated Imager, a semi-automated system, was launched. The two fully automated systems are composed of a central scanner, where the machine examines the cytologic slide, and of one or more review stations, where the cytologists analyze the slides previously scanned centrally. The software used by the two systems identifies the fields of interest so that the cytologists can look only at those points, automatically pointed out by the review station. Furthermore, the FocalPoint system classifies the slides according to their level of risk of containing signs of relevant lesions. Those in the upper classes--about one fifth of the slides--are labelled as « further review », while those in the lower level of risk, i.e. slides that have such a low level of risk that they can be considered negative with no human review, are labelled as « no further review ». The aim of the computer-assisted Pap test is to reduce the time of slide examination and to increase productivity. Furthermore, the number of errors due to lack of attention may decrease. Both systems can be applied to liquid-based cytology, while only the BD FocalPoint can be used on conventional smears. Cytology screening has some critical points: there is a shortage of cytologists/cytotechnicians; the quality strongly depends on the experience and ability of the cytologist; there is a subjective component in the cytological diagnosis; and in highly screened populations, the prevalence of lesions is very low and the activity of cytologists is very monotonous. On the other hand, a progressive shift to molecular screening using the HPV-DNA test as the primary screening test is very likely in the near future; cytology will then be used as a triage test, dramatically reducing the number of slides to process and increasing the prevalence of lesions in those Pap tests. In this Report we assume that the diagnostic accuracy of the computer-assisted Pap test is equal to the accuracy of the manual Pap test and, consequently, that screening using the computer-assisted Pap test has the same efficacy in reducing cervical cancer incidence and mortality. Under this assumption, the effectiveness/benefit/utility is the same for the two screening modes, i.e. the economic analysis will be a cost minimization study. Furthermore, the screening process is identical for the two modalities in all phases except slide interpretation. The cost minimization analysis will therefore be limited to the only phase differing between the two modes, i.e. the study will be a differential cost analysis between a labour-intensive strategy (the traditional Pap test) and a technology-intensive strategy (the computer-assisted Pap test).
    Briefly, the objectives of this HTA Report are: to determine the break-even point of computer-assisted Pap test systems, i.e. the volume of slides processed per year at which putting in place a computer-assisted Pap test system becomes economically convenient; to quantify the cost per Pap test in different scenarios according to screening centre activity volume, productivity of cytologists, and type of cytology (conventional smear or liquid-based, fully automated or semi-automated computer-assisted); to analyse the computer-assisted Pap test in the Italian context, through a survey of the centres using the technology, collecting data useful for the sensitivity analysis of the economic evaluation; to evaluate the acceptability of the technology in the screening services; to evaluate the organizational and financial impact of the computer-assisted Pap test in different scenarios; and to illustrate the ideal organization for implementing the computer-assisted Pap test in terms of volume of activity, productivity, and human and technological resources. To produce this Report, the following process was adopted: application to the Ministry of Health for the grant « Analysis of the impact of professional involvement in evidence generation for the HTA process »; within this project, the sub-project « Cost effectiveness evaluation of the computer-assisted Pap test in the Italian screening programmes » was financed; constitution of the Working Group, which included the project coordinator, the principal investigator, and the health economist; identification of the centres using the computer-assisted Pap test which had published scientific reports on the subject; and identification of the Consulting Committee (stakeholders), which included screening programme managers, pathologists, economists, health policy-makers, citizen organizations, and manufacturers. Once the evaluation was concluded, a plenary meeting of the Working Group and Consulting Committee was held. The Working Group drafted the final version of this Report, which took into account the comments received. The fully automated computer-assisted Pap test has an important financial and organizational impact on screening programmes. The assessment of this health technology reached the following conclusions: according to the survey results, after some initial distrust, cytologists accepted the use of the machine and appreciated the reduction in interpretation time and the reliability in identifying the fields of interest; from an economic point of view, the automated computer-assisted Pap test can be convenient with conventional smears only if the screening centre has a volume of more than 49,000 slides/year and cytologist productivity increases about threefold. It must be highlighted that adopting the automated Pap test is not by itself sufficient to reach such an increase in productivity; the laboratory must be organised or re-organised to optimise the use of the review stations and of person time. In the case of liquid-based cytology, the adoption of the automated computer-assisted Pap test can only increase costs. In fact, liquid-based cytology increases the cost of consumable materials but reduces the interpretation time, even in manual screening. Consequently, the reduction of human costs is smaller in the case of computer-assisted screening.
    Liquid-based cytology has other implications and advantages not linked to the use of the computer-assisted Pap test that should be taken into account and are beyond the scope of this Report. Given that the computer-assisted Pap test reduces human costs, it may be more advantageous where the cost of cytologists is higher. Given the relatively small volume of activity of screening centres in Italy, the computer-assisted Pap test may be reasonable for a network using only one central scanner and several remote review stations. The use of the automated computer-assisted Pap test only for quality control in a single centre is not economically sustainable; in this case as well, several centres, for example at the regional level, may form a consortium to reach a volume of slides sufficient to achieve the break-even point. Regarding the use of a machine rather than human intelligence to interpret the slides, some ethical issues were initially raised, but both the scientific community and healthcare professionals have accepted this technology. The identification of fields of interest by the machine is highly reproducible, reducing subjectivity in the diagnostic process. The Hologic system always includes a check by the human eye, while the FocalPoint system identifies about one fifth of the slides as No Further Review. Several studies, some of which were conducted in Italy, confirmed the reliability of this classification. There is still some resistance to accepting the practice of No Further Review; a check of previous slides and clinical data can be useful to make the cytologist and the clinician more confident. The computer-assisted automated Pap test should be introduced only if there is a need to increase the volume of slides screened to cover the screening target population and sufficient human resources are not available. Switching a programme using conventional slides to automatic scanning can lead to a reduction in costs only if the volume exceeds 49,000 slides per year and cytologist productivity is optimised to more than 20,000 slides per year. At a productivity of 15,000 slides per year or fewer, the automated computer-assisted Pap test cannot be economical. Switching from manual screening with conventional slides to automatic scanning with liquid-based cytology cannot generate any economic saving, but the system could increase output with a given number of staff. The transition from manual to computer-assisted automated screening of liquid-based cytology will not generate savings, and the increase in productivity will be lower than that of the switch from manual/conventional to automated/conventional. The use of biologists or pathologists as cytologists is more costly than the use of cytoscreeners. Given that the automated computer-assisted Pap test reduces human resource costs, its adoption in a model using only biologists and pathologists for screening is more economically advantageous. (ABSTRACT TRUNCATED)
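
    The Report's cost-minimization framing reduces to a differential cost comparison. A toy sketch, with placeholder figures rather than the Report's actual parameters, shows how a break-even volume of the same order as the 49,000 slides/year threshold can arise.

    ```python
    def cost_per_slide(volume, fixed, variable):
        """Annual fixed costs amortized over volume, plus per-slide labour."""
        return fixed / volume + variable

    def break_even(fixed_auto, var_auto, fixed_manual, var_manual):
        # Volume v where fixed_auto/v + var_auto == fixed_manual/v + var_manual
        return (fixed_auto - fixed_manual) / (var_manual - var_auto)

    # Placeholder figures (EUR): automation trades high fixed costs for
    # lower per-slide labour; these values are assumptions for illustration.
    v = break_even(fixed_auto=400_000, var_auto=2.0,
                   fixed_manual=20_000, var_manual=9.8)
    print(round(v))  # ~48,718 slides/year with these assumed inputs
    ```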

  15. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    PubMed

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
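
    A hedged sketch of programmatic SBML construction with the python-libsbml bindings, illustrating the kind of model-generation output discussed above; the species and identifiers are placeholders, not the paper's 4-input AND sensor design.

    ```python
    import libsbml

    # Build a minimal SBML Level 3 Version 1 document programmatically
    doc = libsbml.SBMLDocument(3, 1)
    model = doc.createModel()
    model.setId("generated_from_sbol_design")

    comp = model.createCompartment()
    comp.setId("cell")
    comp.setSize(1.0)
    comp.setConstant(True)

    sp = model.createSpecies()               # placeholder reporter species
    sp.setId("reporter_protein")
    sp.setCompartment("cell")
    sp.setInitialAmount(0.0)
    sp.setHasOnlySubstanceUnits(False)
    sp.setBoundaryCondition(False)
    sp.setConstant(False)

    print(libsbml.writeSBMLToString(doc))    # serialized SBML as XML text
    ```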

  16. Towards surgeon-authored VR training: the scene-development cycle.

    PubMed

    Dindar, Saleh; Nguyen, Thien; Peters, Jörg

    2016-01-01

    Enabling surgeon-educators to themselves create virtual reality (VR) training units promises greater variety, specialization, and relevance of the units. This paper describes a software bridge that semi-automates the scene-generation cycle, a key bottleneck in authoring, modeling, and developing VR units. Augmenting an open source modeling environment with physical behavior attachment and collision specifications yields single-click testing of the full force-feedback enabled anatomical scene.

  17. Point-of-Care Test Equipment for Flexible Laboratory Automation.

    PubMed

    You, Won Suk; Park, Jae Jun; Jin, Sung Moon; Ryew, Sung Moo; Choi, Hyouk Ryeol

    2014-08-01

    Blood tests are some of the core clinical laboratory tests for diagnosing patients. In hospitals, an automated process called total laboratory automation, which relies on a set of sophisticated equipment, is normally adopted for blood tests. Noting that the total laboratory automation system typically requires a large footprint and a significant amount of power, slim and easy-to-move blood test equipment is necessary for specific demands such as emergency departments or small local clinics. In this article, we present a point-of-care test system that provides flexibility and portability at low cost. First, the system components, including a reagent tray, dispensing module, microfluidic disk rotor, and photometry scanner, and their functions are explained. Then, a scheduler algorithm that provides the point-of-care test platform with an efficient test schedule to reduce test time is introduced. Finally, the results of diagnostic tests are presented to evaluate the system. © 2014 Society for Laboratory Automation and Screening.
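
    As a sketch of what such a scheduler might do (the paper's actual algorithm is not specified here), the snippet orders pending assays by the shortest-processing-time rule, which minimizes mean completion time on a single shared resource; the test names and durations are assumptions.

    ```python
    # Pending assays and their processing times in minutes (assumed values)
    tests = {"glucose": 4, "creatinine": 5, "ALT": 6, "lipid_panel": 9}

    order = sorted(tests, key=tests.get)   # shortest-processing-time first
    t, finish = 0, {}
    for name in order:
        t += tests[name]
        finish[name] = t                   # minutes until each result is ready

    print(order, finish)
    ```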

  18. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High-resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower-resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle-block-adjustment-based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.
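
    A minimal sketch of the affine correction step, assuming NumPy: fit a 2D affine transform that maps RPC-projected image coordinates onto the image coordinates observed at the automatically derived GCPs. A production system would add outlier rejection.

    ```python
    import numpy as np

    def fit_affine(rpc_xy, gcp_xy):
        """Least-squares affine correction for RPC-projected coordinates.
        rpc_xy, gcp_xy: (n, 2) arrays of (column, row) image coordinates."""
        A = np.hstack([rpc_xy, np.ones((len(rpc_xy), 1))])   # (n, 3) design matrix
        params, *_ = np.linalg.lstsq(A, gcp_xy, rcond=None)  # (3, 2) affine params
        return params

    def apply_affine(xy, params):
        return np.hstack([xy, np.ones((len(xy), 1))]) @ params
    ```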

  19. A Comparison of Automated and Manual Crater Counting Techniques in Images of Elysium Planitia.

    NASA Astrophysics Data System (ADS)

    Plesko, C. S.; Brumby, S. P.; Asphaug, E.

    2004-11-01

    Surveys of impact craters yield a wealth of information about Martian geology, providing clues to the relative age, local composition, and erosional history of the surface. Martian craters are also of intrinsic geophysical interest, given that the processes by which they form are not entirely clear, especially cratering in ice-saturated regoliths (Plesko et al. 2004, AGU), which appear common on Mars (Squyres and Carr 1986). However, the deluge of data over the last decade has made comprehensive manual counts prohibitive, except in select regions. Given that most small craters on Mars may be secondaries from a few very recent impact events (McEwen et al. in press, Icarus 2004), using select regions for age dating introduces considerable potential for sampling error. Automation is thus an enabling planetary science technology. In contrast to machine counts, human counts are subject to human decision making and thus not intrinsically reproducible. One can address human "noise" by averaging over many human counts (Kanefsky et al. 2001), but this multiplies the already laborious effort required. In this study, we test automated crater counting algorithms developed with the Los Alamos National Laboratory genetic programming suite GENIE (Harvey et al. 2002) against established manual counts of craters in Elysium Planitia, using MOC and THEMIS data. We intend to establish the validity of our method against well-regarded hand counts (Hartmann et al. 2000), and then apply it generally to larger regions of Mars. Previous work on automated crater counting used customized algorithms (Bierhaus et al. 2003, Burl et al. 2001). Algorithms generated by genetic programming have the advantage of requiring little time or user effort to generate, so it is relatively easy to generate a suite of algorithms for varied terrain types, or to compare results from multiple algorithms for improved accuracy (Plesko et al. 2003).
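
    GENIE evolves its own image-processing pipelines; as a simple stand-in for illustration, the sketch below detects circular crater rims with a Hough transform using scikit-image, a common baseline for automated crater counting. The radius range and threshold are assumptions.

    ```python
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    def count_craters(image, radii=np.arange(5, 50, 2)):
        """Detect circular rims in a grayscale image; returns (x, y, r) tuples."""
        edges = canny(image, sigma=2.0)                 # rim edge map
        accum = hough_circle(edges, radii)              # one vote space per radius
        _, cx, cy, r = hough_circle_peaks(accum, radii, threshold=0.4)
        return list(zip(cx, cy, r))
    ```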

  20. Design of Center-TRACON Automation System

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Davis, Thomas J.; Green, Steven

    1993-01-01

    A system for the automated management and control of terminal area traffic, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA Ames Research Center. In a cooperative program, NASA and FAA have efforts underway to install and evaluate the system at the Denver area and Dallas/Ft. Worth area air traffic control facilities. This paper will review CTAS architecture, and automation functions as well as the integration of CTAS into the existing operational system. CTAS consists of three types of integrated tools that provide computer-generated advisories for both en-route and terminal area controllers to guide them in managing and controlling arrival traffic efficiently. One tool, the Traffic Management Advisor (TMA), generates runway assignments, landing sequences and landing times for all arriving aircraft, including those originating from nearby feeder airports. TMA also assists in runway configuration control and flow management. Another tool, the Descent Advisor (DA), generates clearances for the en-route controllers handling arrival flows to metering gates. The DA's clearances ensure fuel-efficient and conflict free descents to the metering gates at specified crossing times. In the terminal area, the Final Approach Spacing Tool (FAST) provides heading and speed advisories that help controllers produce an accurately spaced flow of aircraft on the final approach course. Data bases consisting of several hundred aircraft performance models, airline preferred operational procedures, and a three dimensional wind model support the operation of CTAS. The first component of CTAS, the Traffic Management Advisor, is being evaluated at the Denver TRACON and the Denver Air Route Traffic Control Center. The second component, the Final Approach Spacing Tool, will be evaluated in several stages at the Dallas/Fort Worth Airport beginning in October 1993. An initial stage of the Descent Advisor tool is being prepared for testing at the Denver Center in late 1994. Operational evaluations of all three integrated CTAS tools are expected to begin at the two field sites in 1995.

  1. Clinical brain MR imaging prescriptions in Talairach space: technologist- and computer-driven methods.

    PubMed

    Weiss, Kenneth L; Pan, Hai; Storrs, Judd; Strub, William; Weiss, Jane L; Jia, Li; Eldevik, O Petter

    2003-05-01

    Variability in patient head positioning may yield substantial interstudy image variance in the clinical setting. We describe and test three-step technologist- and computer-automated algorithms designed to image the brain in a standard reference system and reduce variance. Triple-oblique axial images obtained parallel to the Talairach anterior commissure (AC)-posterior commissure (PC) plane were reviewed in a prospective analysis of 126 consecutive patients. Requisite roll, yaw, and pitch corrections, as three authors determined independently and subsequently by consensus, were compared with the technologists' actual graphical prescriptions and those generated by a novel computer-automated three-step (CATS) program. Automated pitch determinations generated with Statistical Parametric Mapping '99 (SPM'99) were also compared. Requisite pitch correction (15.2 degrees +/- 10.2 degrees) far exceeded that for roll (-0.6 degrees +/- 3.7 degrees) and yaw (-0.9 degrees +/- 4.7 degrees) in terms of magnitude and variance (P < .001). Technologist and computer-generated prescriptions substantially reduced interpatient image variance with regard to roll (3.4 degrees and 3.9 degrees vs 13.5 degrees), yaw (0.6 degrees and 2.5 degrees vs 22.3 degrees), and pitch (28.6 degrees, 18.5 degrees with CATS, and 59.3 degrees with SPM'99 vs 104 degrees). CATS performed worse than the technologists in yaw prescription, and it was equivalent in roll and pitch prescriptions. Talairach prescriptions better approximated standard CT canthomeatal angulations (9 degrees vs 24 degrees) and provided more efficient brain coverage than routine axial imaging. Brain MR prescriptions corrected for direct roll, yaw, and Talairach AC-PC pitch can be readily achieved by trained technologists or automated computer algorithms. This ability will substantially reduce interpatient variance, allow better approximation of standard CT angulation, and yield more efficient brain coverage than routine clinical axial imaging.
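
    A minimal sketch of the geometric core of such an algorithm, assuming RAS-oriented coordinates (+x right, +y anterior, +z superior): given marked AC and PC landmarks, compute the yaw and pitch needed to align the AC-PC line with the scanner's anterior-posterior axis. Roll needs a third midline landmark and is omitted; this is an illustration, not the CATS program.

    ```python
    import numpy as np

    def acpc_yaw_pitch(ac, pc):
        """Yaw and pitch (degrees) aligning the AC-PC line with the +y axis."""
        v = np.asarray(ac, float) - np.asarray(pc, float)  # points anteriorly
        yaw = np.degrees(np.arctan2(v[0], v[1]))           # rotation about z
        pitch = np.degrees(np.arctan2(v[2], v[1]))         # rotation about x
        return yaw, pitch

    # Example with made-up landmark coordinates (mm):
    print(acpc_yaw_pitch(ac=(0.5, 30.0, 2.0), pc=(-0.5, 2.0, -5.0)))
    ```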

  2. Automated audiometry using apple iOS-based application technology.

    PubMed

    Foulad, Allen; Bui, Peggy; Djalilian, Hamid

    2013-11-01

    The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry. Prospective diagnostic study. Setting: academic medical center. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess for device variations and compatibility, preliminary work was performed to compare the standardized sound output (dB) of various Apple device and headset combinations. Forty-two subjects underwent automated iOS-based hearing testing in a sound booth, automated iOS-based hearing testing in a quiet room, and conventional manual audiometry. The maximum difference in sound intensity between various Apple device and headset combinations was 4 dB. On average, 96% (95% confidence interval [CI], 91%-100%) of the threshold values obtained using the automated test in a sound booth were within 10 dB of the corresponding threshold values obtained using conventional audiometry. When the automated test was performed in a quiet room, 94% (95% CI, 87%-100%) of the threshold values were within 10 dB of the threshold values obtained using conventional audiometry. Under standardized testing conditions, 90% of the subjects preferred iOS-based audiometry as opposed to conventional audiometry. Apple iOS-based devices provide a platform for automated air conduction audiometry without requiring extra equipment and yield hearing test results that approach those of conventional audiometry.

  3. Satellite battery testing status

    NASA Astrophysics Data System (ADS)

    Haag, R.; Hall, S.

    1986-09-01

    Because of the large numbers of satellite cells currently being tested and anticipated at the Naval Weapons Support Center (NAVWPNSUPPCEN) Crane, Indiana, satellite cell testing is being integrated into the Battery Test Automation Project (BTAP). The BTAP, designed to meet the growing needs for battery testing at the NAVWPNSUPPCEN Crane, will consist of several Automated Test Stations (ATSs) which monitor batteries under test. Each ATS will interface with an Automation Network Controller (ANC) which will collect test data for reduction.

  4. Satellite battery testing status

    NASA Technical Reports Server (NTRS)

    Haag, R.; Hall, S.

    1986-01-01

    Because of the large numbers of satellite cells currently being tested and anticipated at the Naval Weapons Support Center (NAVWPNSUPPCEN) Crane, Indiana, satellite cell testing is being integrated into the Battery Test Automation Project (BTAP). The BTAP, designed to meet the growing needs for battery testing at the NAVWPNSUPPCEN Crane, will consist of several Automated Test Stations (ATSs) which monitor batteries under test. Each ATS will interface with an Automation Network Controller (ANC) which will collect test data for reduction.

  5. An ultraviolet-visible spectrophotometer automation system. Part 3: Program documentation

    NASA Astrophysics Data System (ADS)

    Roth, G. S.; Teuschler, J. M.; Budde, W. L.

    1982-07-01

    The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes 'on-line' spectrophotometric quality assurance determinations, report generation, plot generation, and data reduction for chlorophyll or color analysis. The system also has the capability to process manually entered data for the analysis of chlorophyll or color. For each program of the UVVIS system, this document contains a program description, flowchart, variable dictionary, code listing, and symbol cross-reference table. Also included are descriptions of file structures and of routines common to all automated analyses. The programs are written in Data General extended BASIC, Revision 4.3, under the RDOS operating system, Revision 6.2. The BASIC code has been enhanced for real-time data acquisition, which is accomplished by CALLs to assembly language subroutines. Two other related publications are 'An Ultraviolet-Visible Spectrophotometer Automation System - Part I: Functional Specifications' and 'An Ultraviolet-Visible Spectrophotometer Automation System - Part II: User's Guide.'

  6. A Procedural Electroencephalogram Simulator for Evaluation of Anesthesia Monitors.

    PubMed

    Petersen, Christian Leth; Görges, Matthias; Massey, Roslyn; Dumont, Guy Albert; Ansermino, J Mark

    2016-11-01

    Recent research and advances in the automation of anesthesia are driving the need to better understand electroencephalogram (EEG)-based anesthesia end points and to test the performance of anesthesia monitors. This effort is currently limited by the need to collect raw EEG data directly from patients. A procedural method to synthesize EEG signals was implemented in a mobile software application. The application is capable of sending the simulated signal to an anesthesia depth of hypnosis monitor. Systematic sweeps of the simulator generate functional monitor response profiles reminiscent of how network analyzers are used to test electronic components. Three commercial anesthesia monitors (Entropy, NeuroSENSE, and BIS) were compared with this new technology, and significant response and feature variations between the monitor models were observed; this includes reproducible, nonmonotonic apparent multistate behavior and significant hysteresis at light levels of anesthesia. Anesthesia monitor response to a procedural simulator can reveal significant differences in internal signal processing algorithms. The ability to synthesize EEG signals at different anesthetic depths potentially provides a new method for systematically testing EEG-based monitors and automated anesthesia systems with all sensor hardware fully operational before human trials.
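
    A toy sketch of procedural EEG synthesis, assuming NumPy: filtered random-phase noise whose spectral content shifts toward low frequencies as a 'depth' parameter increases. This illustrates the idea of sweeping a simulator input through a monitor, not the paper's actual signal model.

    ```python
    import numpy as np

    def synth_eeg(depth, n=4096, fs=256, seed=0):
        """Generate an EEG-like trace; depth in [0, 1], 0=awake, 1=deep."""
        rng = np.random.default_rng(seed)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        corner = 2.0 + 28.0 * (1.0 - depth)          # assumed corner frequency (Hz)
        amp = 1.0 / np.maximum(freqs, 1.0)           # roughly 1/f background
        amp /= 1.0 + (freqs / corner) ** 4           # low-pass as depth rises
        phase = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
        return np.fft.irfft(amp * np.exp(1j * phase), n)

    # Sweeping depth from 0 to 1 and feeding the traces to a monitor yields
    # a response profile, analogous to the systematic sweeps described above.
    ```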

  7. Automated drumlin shape and volume estimation using high resolution LiDAR imagery (Curvature Based Relief Separation): A test from the Wadena Drumlin Field, Minnesota

    NASA Astrophysics Data System (ADS)

    Yu, Peter; Eyles, Nick; Sookhan, Shane

    2015-10-01

    Resolving the origin(s) of drumlins and related megaridges in areas of megascale glacial lineations (MSGL) left by paleo-ice sheets is critical to understanding how ancient ice sheets interacted with their sediment beds. MSGL is now linked with fast-flowing ice streams, but there is a broad range of erosional and depositional models. Further progress relies on constraining fluxes of subglacial sediment at the ice sheet base, which in turn depends on morphological data such as landform shape and elongation and, most importantly, landform volume. Past practice in determining shape has employed a broad range of geomorphological methods, from strictly visualisation techniques to more complex semi-automated and automated drumlin extraction methods. This paper reviews and builds on currently available visualisation, semi-automated and automated extraction methods and presents a new Curvature Based Relief Separation (CBRS) technique for drumlin mapping. This uses curvature analysis to generate a base level from which topography can be normalized and drumlin volume can be derived. This methodology is tested using a high-resolution (3 m) LiDAR elevation dataset from the Wadena Drumlin Field, Minnesota, USA, which was constructed by the Wadena Lobe of the Laurentide Ice Sheet ca. 20,000 years ago and which as a whole contains 2000 drumlins across an area of 7500 km2. This analysis demonstrates that CBRS provides an objective and robust procedure for automated drumlin extraction. There is strong agreement with manually selected landforms, but the method is also capable of resolving features that were not detectable manually, thereby considerably expanding the known population of streamlined landforms. CBRS provides an effective automatic method for visualisation of large areas of the streamlined beds of former ice sheets and for modelling sediment fluxes below ice sheets.
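
    A hedged sketch of the curvature step behind a CBRS-style separation, assuming NumPy and a gridded DEM: compute mean curvature by finite differences, then treat near-zero-curvature cells as the base surface from which relief is normalized. The cell size and threshold are placeholders.

    ```python
    import numpy as np

    def mean_curvature(dem, cell=3.0):
        """Mean curvature of a gridded DEM (cell size in metres)."""
        zy, zx = np.gradient(dem, cell)        # first derivatives (rows=y, cols=x)
        zxy, zxx = np.gradient(zx, cell)       # second derivatives
        zyy, _ = np.gradient(zy, cell)
        p, q = zx, zy
        return (((1 + q**2) * zxx - 2 * p * q * zxy + (1 + p**2) * zyy)
                / (2 * (1 + p**2 + q**2) ** 1.5))

    # Cells with |curvature| below a small threshold approximate the base
    # level; relief = dem - interpolated base then yields drumlin volumes.
    ```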

  8. Next Generation Loading System for Detonators and Primers

    DTIC Science & Technology

    Designed, fabricated and installed next generation tooling to provide additional manufacturing capabilities for new detonators and other small...prototype munitions on automated, semi-automated and manual machines. Led design effort, procured and installed a primary explosive Drying Oven for a pilot...facility. Designed, fabricated and installed a Primary Explosives Waste Treatment System in a pilot environmental processing facility. Designed

  9. Substructure analysis techniques and automation. [to eliminate logistical data handling and generation chores

    NASA Technical Reports Server (NTRS)

    Hennrich, C. W.; Konrath, E. J., Jr.

    1973-01-01

    A basic automated substructure analysis capability for NASTRAN is presented which eliminates most of the logistical data handling and generation chores that are currently associated with the method. Rigid formats are proposed which will accomplish this using three new modules, all of which can be added to level 16 with a relatively small effort.

  10. What's New in the Library Automation Arena?

    ERIC Educational Resources Information Center

    Breeding, Marshall

    1998-01-01

    Reviews trends in library automation based on vendors at the 1998 American Library Association Annual Conference. Discusses the major industry trend, a move from host-based computer systems to the new generation of client/server, object-oriented, open systems-based automation. Includes a summary of developments for 26 vendors. (LRW)

  11. Automating Document Delivery: A Conference Report.

    ERIC Educational Resources Information Center

    Ensor, Pat

    1992-01-01

    Describes presentations made at a forum on automation, interlibrary loan (ILL), and document delivery sponsored by the Houston Area Library Consortium. Highlights include access versus ownership; software for ILL; fee-based services; automated management systems for ILL; and electronic mail and online systems for end-user-generated ILL requests.…

  12. Comparison of Automated Scoring Methods for a Computerized Performance Assessment of Clinical Judgment

    ERIC Educational Resources Information Center

    Harik, Polina; Baldwin, Peter; Clauser, Brian

    2013-01-01

    Growing reliance on complex constructed response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that "compare" automated scoring strategies. Here, comparisons are made among five strategies for…

  13. Robo-Lector – a novel platform for automated high-throughput cultivations in microtiter plates with high information content

    PubMed Central

    Huber, Robert; Ritter, Daniel; Hering, Till; Hillmer, Anne-Kathrin; Kensy, Frank; Müller, Carsten; Wang, Le; Büchs, Jochen

    2009-01-01

    Background In industry and academic research, there is an increasing demand for flexible automated microfermentation platforms with advanced sensing technology. However, up to now, conventional platforms cannot generate continuous data in high-throughput cultivations, in particular for monitoring biomass and fluorescent proteins. Furthermore, microfermentation platforms are needed that can easily combine cost-effective, disposable microbioreactors with downstream processing and analytical assays. Results To meet this demand, a novel automated microfermentation platform consisting of a BioLector and a liquid-handling robot (Robo-Lector) was successfully built and tested. The BioLector provides a cultivation system that is able to permanently monitor microbial growth and the fluorescence of reporter proteins under defined conditions in microtiter plates. Three exemplary methods were programmed on the Robo-Lector platform to study in detail high-throughput cultivation processes and especially recombinant protein expression. The host/vector system E. coli BL21(DE3) pRhotHi-2-EcFbFP, expressing the fluorescence protein EcFbFP, was investigated. With the method 'induction profiling' it was possible to conduct 96 different induction experiments (varying inducer concentrations from 0 to 1.5 mM IPTG at 8 different induction times) simultaneously in an automated way. The method 'biomass-specific induction' made it possible to automatically induce cultures with different growth kinetics in a microtiter plate at the same biomass concentration, which resulted in a relative standard deviation of the EcFbFP production of only ± 7%. The third method, 'biomass-specific replication', enabled the generation of equal initial biomass concentrations in main cultures from precultures with different growth kinetics. This was realized by automatically transferring an appropriate inoculum volume from the different preculture microtiter wells to respective wells of the main culture plate, where subsequently similar growth kinetics could be obtained. Conclusion The Robo-Lector generates extensive kinetic data in high-throughput cultivations, particularly for biomass and fluorescence protein formation. Based on the non-invasive on-line monitoring signals, actions of the liquid-handling robot can easily be triggered. This interaction between the robot and the BioLector (Robo-Lector) combines high-content data generation with systematic high-throughput experimentation in an automated fashion, offering new possibilities to study biological production systems. The presented platform uses a standard liquid-handling workstation with widespread automation possibilities. Thus, high-throughput cultivations can now be combined with small-scale downstream processing techniques and analytical assays. Ultimately, this novel versatile platform can accelerate and intensify research and development in the field of systems biology as well as modelling and bioprocess optimization. PMID:19646274
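
    A small sketch of how the 96-condition 'induction profiling' grid could be laid out (8 induction times × 12 inducer concentrations spanning 0-1.5 mM IPTG); the specific concentration values and well mapping are assumptions, not the published protocol.

    ```python
    # Assumed 12-point IPTG series (mM) and 8 induction times (h post-inoculation)
    iptg_mM = [0, 0.01, 0.02, 0.05, 0.1, 0.2, 0.3, 0.5, 0.7, 1.0, 1.25, 1.5]
    induce_h = [2, 3, 4, 5, 6, 7, 8, 9]

    rows = "ABCDEFGH"                      # one induction time per plate row
    plate = {
        f"{rows[i]}{j + 1}": {"t_induce_h": induce_h[i], "iptg_mM": iptg_mM[j]}
        for i in range(len(rows)) for j in range(len(iptg_mM))
    }
    print(plate["A1"], plate["H12"])       # 96 well definitions in total
    ```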

  14. Agile Acceptance Test-Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software.

    PubMed

    Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-04-13

    Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. ©Mujeeb A Basit, Krystal L Baldwin, Vaishnavi Kannan, Emily L Flahaven, Cassandra J Parks, Jason M Ott, Duwayne L Willett. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.

  15. Automated branching pattern report generation for laparoscopic surgery assistance

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Matsuzaki, Tetsuro; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2015-05-01

    This paper presents a method for generating branching pattern reports of abdominal blood vessels for laparoscopic gastrectomy. In gastrectomy, it is very important to understand the branching structure of the abdominal arteries and veins, which feed and drain specific abdominal organs including the stomach, the liver, and the pancreas. In current clinical practice, a surgeon creates a diagnostic report of the patient anatomy. This report summarizes the branching patterns of the blood vessels related to the stomach, and the surgeon decides on the actual operative procedure based on it. This paper presents an automated method to generate a branching pattern report for abdominal blood vessels based on automated anatomical labeling. The report contains a 3D rendering showing important blood vessels and descriptions of the branching patterns of each vessel. We applied this method to fifty cases of 3D abdominal CT scans and confirmed that the proposed method can automatically generate branching pattern reports of the abdominal arteries.

  16. NextGen Operational Improvements: Will They Improve Human Performance?

    NASA Technical Reports Server (NTRS)

    Beard, Bettina L.; Johnston, James C.; Holbrook, Jon

    2013-01-01

    Modernization of the National Airspace System depends critically on the development of advanced technology, including cutting-edge automation, controller decision-support tools and integrated on-demand information. The Next Generation Air Transportation System national plan envisions air traffic control tower automation that proposes solutions for seven problems: 1) departure metering, 2) taxi routing, 3) taxi and runway scheduling, 4) departure runway assignments, 5) departure flow management, 6) integrated arrival and departure scheduling and 7) runway configuration management. Government, academia and industry are simultaneously pursuing the development of these tools. For each tool, the development process typically begins by assessing its potential benefits, and then progresses to designing preliminary versions of the tool, followed by testing the tool's strengths and weaknesses using computational modeling, human-in-the-loop simulation and/or field tests. We compiled the literature, evaluated the methodological rigor of the studies and served as referee for partisan conclusions that were sometimes overly optimistic. Here we provide the results of this review.

  17. An Experimental Study of the Influence of in-Plane Fiber Waviness on Unidirectional Laminates Tensile Properties

    NASA Astrophysics Data System (ADS)

    Zhao, Cong; Xiao, Jun; Li, Yong; Chu, Qiyi; Xu, Ting; Wang, Bendong

    2017-12-01

    In-plane fiber waviness is one of the most common process-induced defects of automated fiber placement, yet its influence on the mechanical properties of fiber-reinforced composites lacks experimental study. In this paper, a new approach to preparing test specimens with in-plane fiber waviness is proposed, in consideration of the mismatch between the current test standard and the actual fiber trajectory. Based on the generation mechanism of in-plane fiber waviness during automated fiber placement, the magnitude of the waviness is characterized by the axial compressive strain of the prepreg tow. The elastic constants and tensile strength of unidirectional laminates with in-plane fiber waviness are calculated by off-axis and maximum-stress theory. Experimental results show that the tensile properties degrade dramatically with increasing magnitude of the waviness, in good agreement with the theoretical analyses. When the prepreg tow compressive strain reached 1.2%, the longitudinal tensile modulus and strength of the unidirectional laminate decreased by 25.5% and 57.7%, respectively.

  18. Metabolic modeling of dynamic 13C NMR isotopomer data in the brain in vivo: Fast screening of metabolic models using automated generation of differential equations

    PubMed Central

    Tiret, Brice; Shestov, Alexander A.; Valette, Julien; Henry, Pierre-Gilles

    2017-01-01

    Most current brain metabolic models are not capable of taking into account the dynamic isotopomer information available from fine structure multiplets in 13C spectra, due to the difficulty of implementing such models. Here we present a new approach that allows automatic implementation of multi-compartment metabolic models capable of fitting any number of 13C isotopomer curves in the brain. The new automated approach also makes it possible to quickly modify and test new models to best describe the experimental data. We demonstrate the power of the new approach by testing the effect of adding separate pyruvate pools in astrocytes and neurons, and adding a vesicular neuronal glutamate pool. Including both changes reduced the global fit residual by half and pointed to dilution of label prior to entry into the astrocytic TCA cycle as the main source of glutamine dilution. The glutamate-glutamine cycle rate was particularly sensitive to changes in the model. PMID:26553273
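
    The approach of generating the differential equations automatically from a reaction list, so that candidate models (e.g. with or without a separate pyruvate pool) can be swapped in and refitted quickly, can be illustrated with a minimal sketch. The network, pool sizes, and flux values below are invented for illustration and are not the authors' brain model.

        # Minimal sketch: auto-generate label-enrichment ODEs from a reaction list
        # (toy network with illustrative fluxes; not the published brain model).
        import numpy as np
        from scipy.integrate import solve_ivp

        # (source pool, target pool, flux) -- illustrative values only
        reactions = [("Pyr", "Glu", 0.7), ("Glu", "Gln", 0.3), ("Gln", "Glu", 0.3)]
        pools = {"Pyr": 0.1, "Glu": 10.0, "Gln": 4.0}   # pool sizes, assumed
        names = list(pools)

        def dydt(t, y):
            # y holds the 13C fractional enrichment of each pool
            enr = dict(zip(names, y))
            d = {n: 0.0 for n in names}
            for src, dst, v in reactions:
                # label carried by the flux, diluted into the destination pool
                d[dst] += v * (enr[src] - enr[dst]) / pools[dst]
            d["Pyr"] += 0.5 * (1.0 - enr["Pyr"]) / pools["Pyr"]  # labeled substrate inflow
            return [d[n] for n in names]

        sol = solve_ivp(dydt, (0.0, 120.0), [0.0, 0.0, 0.0])
        print(dict(zip(names, np.round(sol.y[:, -1], 3))))

    Adding or removing a pool then only changes the reaction list rather than hand-written equations, which is the labor-saving point the record makes.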

  19. The Challenge of Grounding Planning in Simulation with an Interactive Model Development Environment

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Frank, Jeremy D.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.

    2011-01-01

    A principal obstacle to fielding automated planning systems is the difficulty of modeling. Physical systems are conventionally modeled based on specification documents and the modeler's understanding of the system. Thus, the model is developed in a way that is disconnected from the system's actual behavior and is vulnerable to manual error. Another obstacle to fielding planners is testing and validation. For a space mission, generated plans must often be validated by translating them into command sequences that are run in a simulation testbed. Testing in this way is complex and onerous because of the large number of possible plans and states of the spacecraft. However, if used as a source of domain knowledge, the simulator can ease validation. This paper poses a challenge: to ground planning models in the system physics represented by simulation. A proposed interactive model development environment illustrates the integration of planning and simulation to meet the challenge. This integration reveals research paths for automated model construction and validation.

  20. TU-AB-201-02: An Automated Treatment Plan Quality Assurance Program for Tandem and Ovoid High Dose-Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, J; Shi, F; Hrycushko, B

    2015-06-15

    Purpose: For tandem and ovoid (T&O) HDR brachytherapy in our clinic, it is required that the planning physicist manually capture ∼10 images during planning, perform a secondary dose calculation and generate a report, combine them into a single PDF document, and upload it to a record-and-verify system to prove to an independent plan checker that the case was planned correctly. Not only does this slow down the already time-consuming clinical workflow, the PDF document also limits the number of parameters that can be checked. To solve these problems, we have developed a web-based automatic quality assurance (QA) program. Methods: We set up a QA server accessible through a web interface. A T&O plan and CT images are exported as DICOM-RT files and uploaded to the server. The software checks 13 geometric features, e.g. whether the dwell positions are reasonable, and 10 dosimetric features, e.g. secondary dose calculations via the TG-43 formalism and D2cc to critical structures. A PDF report is automatically generated with errors and potential issues highlighted. It also contains images showing important geometric and dosimetric aspects to prove the plan was created following standard guidelines. Results: The program has been clinically implemented in our clinic. In each of the 58 T&O plans we tested, a 14-page QA report was automatically generated. It took ∼45 sec to export the plan and CT images and ∼30 sec to perform the QA tests and generate the report. In contrast, our manual QA document preparation took on average ∼7 minutes under optimal conditions and up to 20 minutes when mistakes were made during document assembly. Conclusion: We have tested the efficiency and effectiveness of an automated process for treatment plan QA of HDR T&O cases. This software was shown to improve the workflow compared to our conventional manual approach.
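
    As a rough illustration of the secondary dose calculation such a QA program performs, the sketch below evaluates a TG-43 point-source approximation at a few distances. The air-kerma strength, dose-rate constant, and radial dose values are illustrative stand-ins rather than clinical data, and a real check would use the full line-source formalism with consensus datasets.

        # TG-43 point-source secondary check (illustrative constants only).
        SK = 40000.0        # air-kerma strength (U), assumed
        LAMBDA = 1.11       # dose-rate constant (cGy/h/U), approximate Ir-192 value
        g_table = {0.5: 1.00, 1.0: 1.00, 2.0: 0.99, 5.0: 0.96}  # radial dose g(r), rough

        def g(r_cm):
            # nearest-tabulated lookup; a real tool would interpolate consensus data
            return g_table[min(g_table, key=lambda k: abs(k - r_cm))]

        def dose_rate(r_cm):
            # Sk * Lambda * (r0/r)^2 * g(r), with r0 = 1 cm and anisotropy ~ 1
            return SK * LAMBDA * (1.0 / r_cm) ** 2 * g(r_cm)

        for dwell_time_s, r in [(12.0, 1.0), (8.0, 2.0)]:
            print(f"r={r} cm: {dose_rate(r) * dwell_time_s / 3600.0:.2f} cGy")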

  1. A universal algorithm for an improved finite element mesh generation: mesh quality assessment in comparison to former automated mesh-generators and an analytic model.

    PubMed

    Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid

    2005-06-01

    FE-modeling of complex anatomical structures has not yet been solved satisfactorily. Voxel-based, as opposed to contour-based, algorithms allow automated mesh generation based on the image data; nonetheless, their geometric precision is limited. We developed an automated mesh-generator that combines the advantages of voxel-based generation with improved representation of the geometry by displacement of nodes on the object surface. Models of an artificial 3D pipe section and a skullbase were generated with different mesh densities using the newly developed geometric, unsmoothed and smoothed voxel generators. Compared to the analytic calculation of the 3D pipe-section model, the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models with small volume error, and 0.126-0.273 for the geometric models. The highest element-energy error, as a criterion for the mesh quality, was 2.61x10(-2) N mm, 2.46x10(-2) N mm and 1.81x10(-2) N mm for the unsmoothed, smoothed and geometric voxel models, respectively. The geometric model of the 3D skullbase resulted in the lowest element-energy error and volume error. This algorithm also allowed the best representation of anatomical details. The presented geometric mesh-generator is universally applicable and allows automated and accurate modeling by combining the advantages of the voxel technique and of improved surface modeling.

  2. Benchmarking and performance analysis of the CM-2. [SIMD computer

    NASA Technical Reports Server (NTRS)

    Myers, David W.; Adams, George B., II

    1988-01-01

    A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as gain insight into what performance criteria are needed when evaluating parallel processing machines.
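
    The statistical treatment described, repeating each measurement and summarizing the noisy timer values, can be sketched in a few lines. The Python harness below is only an analogue of the original LISP/PARIS framework and its cm:time readings.

        # Repeat a benchmark and report robust statistics over noisy timings.
        import statistics, time

        def bench(fn, runs=30):
            samples = []
            for _ in range(runs):
                t0 = time.perf_counter()
                fn()
                samples.append(time.perf_counter() - t0)
            return statistics.median(samples), statistics.stdev(samples)

        med, sd = bench(lambda: sum(i * i for i in range(100000)))
        print(f"median {med*1e3:.2f} ms, stdev {sd*1e3:.2f} ms over 30 runs")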

  3. Automated generation of weld path trajectories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sizemore, John M.; Hinman-Sweeney, Elaine Marie; Ames, Arlo Leroy

    2003-06-01

    AUTOmated GENeration of Control Programs for Robotic Welding of Ship Structure (AUTOGEN) is software that automates the planning and compiling of control programs for robotic welding of ship structure. The software works by evaluating computer representations of the ship design and the manufacturing plan. Based on this evaluation, AUTOGEN internally identifies and appropriately characterizes each weld. Then it constructs the robot motions necessary to accomplish the welds and determines for each the correct assignment of process control values. AUTOGEN generates these robot control programs completely without manual intervention or edits except to correct wrong or missing input data. Most ship structure assemblies are unique or at best manufactured only a few times. Accordingly, the high cost inherent in all previous methods of preparing complex control programs has made robot welding of ship structures economically unattractive to the U.S. shipbuilding industry. AUTOGEN eliminates the cost of creating robot control programs. With programming costs eliminated, capitalization of robots to weld ship structures becomes economically viable. Robot welding of ship structures will result in reduced ship costs, uniform product quality, and enhanced worker safety. Sandia National Laboratories and Northrop Grumman Ship Systems worked with the National Shipbuilding Research Program to develop a means of automated path and process generation for robotic welding. This effort resulted in the AUTOGEN program, which has successfully demonstrated automated path generation and robot control. Although the current implementation of AUTOGEN is optimized for welding applications, the path and process planning capability has applicability to a number of industrial applications, including painting, riveting, and adhesive delivery.

  4. 76 FR 19468 - Amended Certification Regarding Eligibility To Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-07

    ... Known As ATW Automation, Inc., Livonia Michigan TA-W-72,075A Assembly & Test Worldwide, Inc., Currently... Saginaw, Michigan locations of Assembly & Test Worldwide, Inc., are currently known as ATW Automation, Inc... Automation, Inc., Livonia, Michigan (TA-W-72,075); Assembly & Test Worldwide, Inc., currently known as ATW...

  5. NASA Tech Briefs, August 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Program Merges SAR Data on Terrain and Vegetation Heights; Using G(exp 4)FETs as a Data Router for In-Plane Crossing of Signal Paths; Two Algorithms for Processing Electronic Nose Data; Radiation-Tolerant Dual Data Bus; General-Purpose Front End for Real-Time Data Processing; Nanocomposite Photoelectrochemical Cells; Ultracapacitor-Powered Cordless Drill; Cumulative Timers for Microprocessors; Photocatalytic/Magnetic Composite Particles; Separation and Sealing of a Sample Container Using Brazing; Automated Aerial Refueling Hitches a Ride on AFF; Cobra Probes Containing Replaceable Thermocouples; High-Speed Noninvasive Eye-Tracking System; Detergent-Specific Membrane Protein Crystallization Screens; Evaporation-Cooled Protective Suits for Firefighters; Plasmonic Antenna Coupling for QWIPs; Electronic Tongue Containing Redox and Conductivity Sensors; Improved Heat-Stress Algorithm; A Method of Partly Automated Testing of Software; Rover Wheel-Actuated Tool Interface; and Second-Generation Electronic Nose.

  6. A prototype to automate the video subsystem routing for the video distribution subsystem of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Betz, Jessie M. Bethly

    1993-12-01

    The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.

  7. Automation of Some Operations of a Wind Tunnel Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Buggele, Alvin E.

    1996-01-01

    Artificial neural networks were used successfully to sequence operations in a small, recently modernized, supersonic wind tunnel at NASA-Lewis Research Center. The neural nets generated correct estimates of shadowgraph patterns, pressure sensor readings and mach numbers for conditions occurring shortly after startup and extending to fully developed flow. Artificial neural networks were trained and tested for estimating: sensor readings from shadowgraph patterns, shadowgraph patterns from shadowgraph patterns and sensor readings from sensor readings. The 3.81 by 10 in. (0.0968 by 0.254 m) tunnel was operated with its mach 2.0 nozzle, and shadowgraph was recorded near the nozzle exit. These results support the thesis that artificial neural networks can be combined with current workstation technology to automate wind tunnel operations.

  8. Automated Interpretation of Blood Culture Gram Stains by Use of a Deep Convolutional Neural Network.

    PubMed

    Smith, Kenneth P; Kang, Anthony D; Kirby, James E

    2018-03-01

    Microscopic interpretation of stained smears is one of the most operator-dependent and time-intensive activities in the clinical microbiology laboratory. Here, we investigated application of an automated image acquisition and convolutional neural network (CNN)-based approach for automated Gram stain classification. Using an automated microscopy platform, uncoverslipped slides were scanned with a 40× dry objective, generating images of sufficient resolution for interpretation. We collected 25,488 images from positive blood culture Gram stains prepared during routine clinical workup. These images were used to generate 100,213 crops containing Gram-positive cocci in clusters, Gram-positive cocci in chains/pairs, Gram-negative rods, or background (no cells). These categories were targeted for proof-of-concept development as they are associated with the majority of bloodstream infections. Our CNN model achieved a classification accuracy of 94.9% on a test set of image crops. Receiver operating characteristic (ROC) curve analysis indicated a robust ability to differentiate between categories with an area under the curve of >0.98 for each. After training and validation, we applied the classification algorithm to new images collected from 189 whole slides without human intervention. Sensitivity and specificity were 98.4% and 75.0% for Gram-positive cocci in chains and pairs, 93.2% and 97.2% for Gram-positive cocci in clusters, and 96.3% and 98.1% for Gram-negative rods. Taken together, our data support a proof of concept for a fully automated classification methodology for blood-culture Gram stains. Importantly, the algorithm was highly adept at identifying image crops with organisms and could be used to present prescreened, classified crops to technologists to accelerate smear review. This concept could potentially be extended to all Gram stain interpretive activities in the clinical laboratory. Copyright © 2018 American Society for Microbiology.
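
    A minimal sketch of the classification stage is shown below: a small convolutional network assigning each image crop to one of the four categories. The architecture, crop size, and random inputs are illustrative assumptions, not the network trained in the study.

        # Toy 4-class CNN for Gram-stain image crops (PyTorch; illustrative only).
        import torch
        import torch.nn as nn

        class CropNet(nn.Module):
            def __init__(self, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 crops

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        model = CropNet()
        logits = model(torch.randn(8, 3, 64, 64))   # a batch of 8 fake crops
        print(logits.argmax(dim=1))                 # predicted category per crop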

  9. Automated inhaled nitric oxide alerts for adult extracorporeal membrane oxygenation patient identification.

    PubMed

    Belenkiy, Slava M; Batchinsky, Andriy I; Park, Timothy S; Luellen, David E; Serio-Melvin, Maria L; Cancio, Leopoldo C; Pamplin, Jeremy C; Chung, Kevin K; Salinas, Josè; Cannon, Jeremy W

    2014-09-01

    Recently, automated alerts have been used to identify patients with respiratory failure based on set criteria, which can be gleaned from the electronic medical record (EMR). Such an approach may also be useful for identifying patients with severe adult respiratory distress syndrome (ARDS) who may benefit from extracorporeal membrane oxygenation (ECMO). Inhaled nitric oxide (iNO) is a common rescue therapy for severe ARDS which can be easily tracked in the EMR, and some patients started on iNO may have indications for initiating ECMO. This case series summarizes our experience with using automated electronic alerts for ECMO team activation focused particularly on an alert triggered by the initiation of iNO. After a brief trial evaluation, our Smart Alert system generated an automated page and e-mail alert to ECMO team members whenever a nonzero value for iNO appeared in the respiratory care section of our EMR. If iNO was initiated for severe respiratory failure, a detailed evaluation by the ECMO team determined if ECMO was indicated. For those patients managed with ECMO, we tabulated baseline characteristics, indication for ECMO, and outcomes. From September 2012 to July 2013, 45 iNO alerts were generated on 42 unique patients. Six patients (14%) met criteria for ECMO. Of these, four were identified exclusively by the iNO alert. At the time of the alert, the median PaO₂-to-FIO₂ ratio was 64 mm Hg (range, 55-107 mm Hg), the median age-adjusted oxygenation index was 73 (range, 51-96), and the median Murray score was 3.4 (range, 3-3.75), indicating severe respiratory failure. Median time from iNO alert to ECMO initiation was 81 hours (range, -2-292 hours). Survival to hospital discharge was 83% in those managed with ECMO. Automated alerts may be useful for identifying patients with severe ARDS who may be ECMO candidates. Diagnostic test, level V.
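
    The trigger logic itself is simple, as the following sketch suggests: scan incoming respiratory-care records and notify the team on any nonzero iNO value. The record format and the notify() hook are hypothetical stand-ins for the actual EMR interface.

        # Page/e-mail the ECMO team when a nonzero iNO value appears (sketch).
        def check_ino_rows(rows, notify):
            for row in rows:
                if row.get("parameter") == "iNO_ppm" and float(row.get("value", 0)) > 0:
                    notify(f"iNO started for patient {row['patient_id']}: "
                           f"{row['value']} ppm -- evaluate for ECMO")

        check_ino_rows(
            [{"patient_id": "A123", "parameter": "iNO_ppm", "value": "20"}],
            notify=print,   # a real system would send a page or e-mail here
        )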

  10. Laser materials processing of complex components: from reverse engineering via automated beam path generation to short process development cycles

    NASA Astrophysics Data System (ADS)

    Görgl, Richard; Brandstätter, Elmar

    2017-01-01

    The article presents an overview of what is possible nowadays in the field of laser materials processing. The state of the art in the complete process chain is shown, starting with the generation of a specific component's CAD data and continuing with the automated motion path generation for the laser head carried by a CNC or robot system. Application examples from laser cladding and laser-based additive manufacturing are given.

  11. Rapid Automated Antimicrobial Susceptibility Testing of Streptococcus pneumoniae by Use of the bioMerieux VITEK 2

    PubMed Central

    Jorgensen, James H.; Barry, Arthur L.; Traczewski, M. M.; Sahm, Daniel F.; McElmeel, M. Leticia; Crawford, Sharon A.

    2000-01-01

    The VITEK 2 is a new automated instrument for rapid organism identification and susceptibility testing. It has the capability of performing rapid susceptibility testing of Streptococcus pneumoniae with specially configured cards that contain enriched growth medium and antimicrobial agents relevant for this organism. The present study compared the results of testing of a group of 53 challenge strains of pneumococci with known resistance properties and a collection of clinical isolates examined in two study phases with a total of 402 and 416 isolates, respectively, with a prototype of the VITEK 2. Testing was conducted in three geographically separate laboratories; the challenge collection was tested by all three laboratories, and the unique clinical isolates were tested separately by the individual laboratories. The VITEK 2 results of tests with 10 antimicrobial agents were compared to the results generated by the National Committee for Clinical Laboratory Standards reference broth microdilution MIC test method. Excellent interlaboratory agreement was observed with the challenge strains. The overall agreement within a single twofold dilution of MICs defined by the VITEK 2 and reference method with the clinical isolates was 96.3%, although there were a number of off-scale MICs that could not be compared. The best agreement with the clinical isolates was achieved with ofloxacin and chloramphenicol (100%), and the lowest level of agreement among those drugs with sufficient on-scale MICs occurred with trimethoprim-sulfamethoxazole (89.7%). Overall there were 1.3% very major, 6.6% minor, and no major interpretive category errors encountered with the clinical isolates, although >80% of the minor interpretive errors involved only a single log2 dilution difference. The mean time for generation of susceptibility results with the clinical isolates was 8.1 h. The VITEK 2 provided rapid, reliable susceptibility category determinations with both the challenge and clinical isolates examined in this study. PMID:10921932
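
    The headline statistic, agreement of MICs within a single twofold dilution, can be computed as in the sketch below; the MIC values are invented examples, not study data.

        # Percent agreement within one twofold (log2) dilution between methods.
        import math

        def essential_agreement(mic_test, mic_ref):
            ok = sum(abs(math.log2(t) - math.log2(r)) <= 1
                     for t, r in zip(mic_test, mic_ref))
            return 100.0 * ok / len(mic_test)

        vitek = [0.5, 1.0, 2.0, 0.25, 8.0]   # ug/mL, invented
        broth = [0.5, 2.0, 2.0, 0.50, 2.0]   # reference method, invented
        print(f"essential agreement: {essential_agreement(vitek, broth):.1f}%")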

  12. Policy-based secure communication with automatic key management for industrial control and automation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chernoguzov, Alexander; Markham, Thomas R.; Haridas, Harshal S.

    A method includes generating at least one access vector associated with a specified device in an industrial process control and automation system. The specified device has one of multiple device roles. The at least one access vector is generated based on one or more communication policies defining communications between one or more pairs of device roles in the industrial process control and automation system, where each pair of device roles includes the device role of the specified device. The method also includes providing the at least one access vector to at least one of the specified device and one or more other devices in the industrial process control and automation system in order to control communications to or from the specified device.
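
    A toy rendering of the idea: given communication policies over pairs of device roles, derive the set of roles a given device may communicate with. The role names and policy encoding below are assumptions for illustration only.

        # Derive a device's access vector from role-pair communication policies.
        policies = {("controller", "sensor"), ("controller", "gateway")}  # allowed pairs

        def access_vector(device_role, all_roles):
            # roles this device may talk to, in either direction of a policy pair
            return sorted(r for r in all_roles
                          if (device_role, r) in policies or (r, device_role) in policies)

        roles = ["controller", "sensor", "gateway", "historian"]
        print(access_vector("sensor", roles))   # -> ['controller']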

  13. An industrial engineering approach to laboratory automation for high throughput screening

    PubMed Central

    Menke, Karl C.

    2000-01-01

    Across the pharmaceutical industry, there are a variety of approaches to laboratory automation for high throughput screening. At Sphinx Pharmaceuticals, the principles of industrial engineering have been applied to systematically identify and develop those automated solutions that provide the greatest value to the scientists engaged in lead generation. PMID:18924701

  14. The Automation Inventory of Research Libraries, 1986.

    ERIC Educational Resources Information Center

    Sitts, Maxine K., Ed.

    Based on information and data from 113 Association of Research Libraries (ARL) members that were gathered and updated between March and August 1986, this publication was generated from a database developed by ARL to provide timely, comparable information about the extent and nature of automation within the ARL community. Trends in automation are…

  15. Development of a Prototype Automation Simulation Scenario Generator for Air Traffic Management Software Simulations

    NASA Technical Reports Server (NTRS)

    Khambatta, Cyrus F.

    2007-01-01

    A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system, and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.

  16. Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation

    NASA Technical Reports Server (NTRS)

    Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat

    2011-01-01

    The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized, data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) web services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.
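
    A rough sketch of the discovery half of such an interface appears below: browse for instruments advertising an mDNS/SD service type, then query a RESTful status endpoint on each. It assumes the third-party python-zeroconf and requests packages; the service type and /status path are invented for illustration, and ATML payload handling is omitted.

        # Discover test stations via mDNS/SD and poll a REST endpoint (sketch).
        import time
        import requests
        from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

        class TestStationListener(ServiceListener):
            def add_service(self, zc, type_, name):
                info = zc.get_service_info(type_, name)
                if info:
                    url = f"http://{info.parsed_addresses()[0]}:{info.port}/status"
                    print(name, "->", requests.get(url, timeout=2).status_code)

            def update_service(self, zc, type_, name): pass
            def remove_service(self, zc, type_, name): pass

        zc = Zeroconf()
        ServiceBrowser(zc, "_teststation._tcp.local.", TestStationListener())
        time.sleep(5)   # let discovery run briefly
        zc.close()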

  17. Solid polymer electrolyte water electrolysis system development. [to generate oxygen for manned space station applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Solid polymer electrolyte technology used in a water electrolysis system (WES) to generate oxygen and hydrogen for manned space station applications was investigated. A four-man rated, low-pressure breadboard water electrolysis system with the necessary instrumentation and controls was fabricated and tested. A six-man rated, high-pressure, high-temperature, advanced preprototype WES was developed. This configuration included the design and development of an advanced water electrolysis module, capable of operation at 400 psig and 200 F, and a dynamic phase separator/pump in place of a passive phase separator design. Evaluation of this system demonstrated the goal of safe, unattended automated operation at high pressure and high temperature with an accumulated gas generation time of over 1000 hours.

  18. Reliability Constrained Priority Load Shedding for Aerospace Power System Automation

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)

    2000-01-01

    Improving load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be considered, including the congestion margin determined by weighted probability contingency, a component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is based on the priority, value, and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended Everett's method to handle expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated in the optimization method. It assists in selecting which feeder load to shed, along with the load's location, value, and priority; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads, and a network.

  19. Automated calibration of laser spectrometer measurements of δ18O and δ2H values in water vapour using a Dew Point Generator.

    PubMed

    Munksgaard, Niels C; Cheesman, Alexander W; Gray-Spence, Andrew; Cernusak, Lucas A; Bird, Michael I

    2018-06-30

    Continuous measurement of stable O and H isotope compositions in water vapour requires automated calibration for remote field deployments. We developed a new low-cost device for calibration of both water vapour mole fraction and isotope composition. We coupled a commercially available dew point generator (DPG) to a laser spectrometer and developed hardware for water and air handling along with software for automated operation and data processing. We characterised isotopic fractionation in the DPG, conducted a field test and assessed the influence of critical parameters on the performance of the device. An analysis time of 1 hour was sufficient to achieve memory-free analysis of two water vapour standards and the δ18O and δ2H values were found to be independent of water vapour concentration over a range of ≈20,000-33,000 ppm. The reproducibility of the standard vapours over a 10-day period was better than 0.14 ‰ and 0.75 ‰ for δ18O and δ2H values, respectively (1 σ, n = 11) prior to drift correction and calibration. The analytical accuracy was confirmed by the analysis of a third independent vapour standard. The DPG distillation process requires that isotope calibration takes account of DPG temperature, analysis time, injected water volume and air flow rate. The automated calibration system provides high accuracy and precision and is a robust, cost-effective option for long-term field measurements of water vapour isotopes. The necessary modifications to the DPG are minor and easily reversible. Copyright © 2018 John Wiley & Sons, Ltd.
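
    The calibration step reduces to mapping the measured delta values of the two vapour standards onto their reference values, as in this minimal two-point sketch; the numbers are illustrative, and a full implementation would also interpolate repeated standard runs over time for drift correction.

        # Two-point isotope calibration from paired standard measurements.
        import numpy as np

        measured = np.array([-6.1, -21.4])    # instrument delta-18O of standards, permil
        reference = np.array([-5.8, -20.9])   # accepted values, permil (illustrative)
        slope, intercept = np.polyfit(measured, reference, 1)

        def calibrate(delta_measured):
            return slope * delta_measured + intercept

        print(f"sample -12.3 permil reads as {calibrate(-12.3):.2f} permil calibrated")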

  20. Automated Content Detection for Cassini Images

    NASA Astrophysics Data System (ADS)

    Stanboli, A.; Bue, B.; Wagstaff, K.; Altinok, A.

    2017-06-01

    NASA missions generate numerous images that accumulate in increasingly large archives. Image archives are currently not searchable by image content. We present an automated content detection prototype that can enable content search.

  1. Promon's participation in the Brasilsat program: first & second generations

    NASA Astrophysics Data System (ADS)

    Depaiva, Ricardo N.

    This paper presents an overview of the Brasilsat program, space and ground segments, developed by Hughes and Promon. Promon is a Brazilian engineering company that has been actively participating in the Brasilsat Satellite Telecommunications Program since its beginning. During the first generation, as subcontractor of the Spar/Hughes/SED consortium, Promon had a significant participation in the site installation of the Ground Segment, including the antennas. During the second generation, as partner of a consortium with Hughes, Promon participated in the upgrade of Brasilsat's Ground Segment systems: the TT&C (TCR1, TCR2, and SCC) and the COCC (Communications and Operations Control Center). This upgrade consisted of the design and development of hardware and software to support the second generation requirements, followed by integration and tests, factory acceptance tests, transport to site, site installation, site acceptance tests and warranty support. The upgraded systems are distributed over four sites with remote access to the main ground station. The solutions adopted provide a high level of automation, and easy operator interaction. The hardware and software technologies were selected to provide the flexibility to incorporate new technologies and services from the demanding satellite telecommunications market.

  2. Towards an automated intelligence product generation capability

    NASA Astrophysics Data System (ADS)

    Smith, Alison M.; Hawes, Timothy W.; Nolan, James J.

    2015-05-01

    Creating intelligence information products is a time-consuming and difficult process for analysts faced with identifying key pieces of information relevant to a complex set of information requirements. Complicating matters, these key pieces of information exist in multiple modalities scattered across data stores, buried in huge volumes of data. This results in the predicament in which analysts currently find themselves: information retrieval and management consume huge amounts of time that could be better spent performing analysis. The persistent growth in data accumulation rates will only increase the amount of time spent on these tasks without a significant advance in automated solutions for information product generation. We present a product generation tool, Automated PrOduct Generation and Enrichment (APOGEE), which aims to automate the information product creation process in order to shift the bulk of the analysts' effort from data discovery and management to analysis. APOGEE discovers relevant text, imagery, video, and audio for inclusion in information products using semantic and statistical models of unstructured content. APOGEE's mixed-initiative interface, supported by highly responsive backend mechanisms, allows analysts to dynamically control the product generation process, ensuring a maximally relevant result. The combination of these capabilities results in significant reductions in the time it takes analysts to produce information products while helping to increase overall coverage. In an evaluation with a domain expert, APOGEE showed the potential to cut product generation time by a factor of 20. The result is a flexible end-to-end system that can be rapidly deployed in new operational settings.

  3. The development of a Flight Test Engineer's Workstation for the Automated Flight Test Management System

    NASA Technical Reports Server (NTRS)

    Tartt, David M.; Hewett, Marle D.; Duke, Eugene L.; Cooper, James A.; Brumbaugh, Randal W.

    1989-01-01

    The Automated Flight Test Management System (ATMS) is being developed as part of the NASA Aircraft Automation Program. This program focuses on the application of interdisciplinary state-of-the-art technology in artificial intelligence, control theory, and systems methodology to problems of operating and flight testing high-performance aircraft. The development of a Flight Test Engineer's Workstation (FTEWS) is presented, with a detailed description of the system, technical details, and future planned developments. The goal of the FTEWS is to provide flight test engineers and project officers with an automated computer environment for planning, scheduling, and performing flight test programs. The FTEWS system is an outgrowth of the development of ATMS and is an implementation of a component of ATMS on SUN workstations.

  4. Overcoming Barriers to Technology Adoption in Small Manufacturing Enterprises (SMEs)

    DTIC Science & Technology

    2003-06-01

    automates quote-generation, order-processing workflow management, performance analysis, and accounting functions. Ultimately, it will enable Magdic...that Magdic implement an MES instead. The MES, in addition to solving the problem of document management, would automate quote-generation, order ... processing, workflow management, performance analysis, and accounting functions. To help Magdic personnel learn about the MES, TIDE personnel provided

  5. Automated Report Generation for Research Data Repositories: From i2b2 to PDF.

    PubMed

    Thiemann, Volker S; Xu, Tingyan; Röhrig, Rainer; Majeed, Raphael W

    2017-01-01

    We developed an automated toolchain to generate reports of i2b2 data. It is based on free open source software and runs on a Java Application Server. It is successfully used in an ED registry project. The solution is highly configurable and portable to other projects based on i2b2 or compatible factual data sources.
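
    As a loose analogue of such a toolchain (the cited one is Java-based), the sketch below queries an i2b2-style fact table and renders aggregate counts to PDF. The schema, concept codes, and the use of SQLite and ReportLab are assumptions for illustration.

        # Query a toy fact table and write a one-page PDF report (sketch).
        import sqlite3
        from reportlab.pdfgen import canvas

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE observation_fact (concept_cd TEXT)")
        con.executemany("INSERT INTO observation_fact VALUES (?)",
                        [("ICD10:J18.9",), ("ICD10:J18.9",), ("ICD10:I21.4",)])

        pdf = canvas.Canvas("report.pdf")
        pdf.drawString(72, 800, "ED registry report (demo)")
        rows = con.execute("SELECT concept_cd, COUNT(*) FROM observation_fact "
                           "GROUP BY concept_cd ORDER BY 2 DESC")
        for i, (code, n) in enumerate(rows):
            pdf.drawString(72, 780 - 16 * i, f"{code}: {n}")
        pdf.save()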

  6. Agile Acceptance Test–Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software

    PubMed Central

    Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-01-01

    Background Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test–driven development and automated regression testing promotes reliability. Test–driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a “safety net” for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and “living” design documentation. Rapid-cycle development or “agile” methods are being successfully applied to CDS development. The agile practice of automated test–driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as “executable requirements.” Objective We aimed to establish feasibility of acceptance test–driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Methods Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory’s expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. Results We used test–driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the “executable requirements” are shown prior to building the CDS alert, during build, and after successful build. Conclusions Automated acceptance test–driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test–driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922

  7. Automated processing of endoscopic surgical instruments.

    PubMed

    Roth, K; Sieber, J P; Schrimm, H; Heeg, P; Buess, G

    1994-10-01

    This paper deals with the requirements for automated processing of endoscopic surgical instruments. After a brief analysis of the current problems, solutions are discussed. Test procedures have been developed to validate the automated processing, so that the cleaning results are guaranteed and reproducible. In addition, a device for testing and cleaning, called TC-MIC, was designed together with Netzsch Newamatic and PCI to automate processing and reduce manual work.

  8. Evaluation of an automated microplate technique in the Galileo system for ABO and Rh(D) blood grouping.

    PubMed

    Xu, Weiyi; Wan, Feng; Lou, Yufeng; Jin, Jiali; Mao, Weilin

    2014-01-01

    A number of automated devices for pretransfusion testing have recently become available. This study evaluated the Immucor Galileo System, a fully automated device based on the microplate hemagglutination technique, for ABO/Rh(D) determinations. Routine ABO/Rh typing tests were performed on 13,045 samples using the Immucor automated instruments. The manual tube method was used to resolve ABO forward and reverse grouping discrepancies. D-negative test results were investigated and confirmed manually by the indirect antiglobulin test (IAT). The system rejected 70 tests for sample inadequacy. A total of 87 samples were read as "no type determined" due to forward and reverse grouping discrepancies; 25 of these results were caused by sample hemolysis. After further testing, we found that 34 were caused by weakened RBC antibodies, 5 were attributable to weak A and/or B antigens, 4 were due to mixed-field reactions, and 8 involved high-titer cold agglutinins reacting only at temperatures below 34 degrees C. In the remaining 11 cases, irregular RBC antibodies were identified in 9 samples (seven anti-M and two anti-P) and subgroups were identified in 2 samples (one A1 and one A2) by a reference laboratory. As for D typing, 2 weak D+ samples missed by the automated system gave negative results, but weak-positive reactions were observed in the IAT. The Immucor Galileo System is reliable and well suited for ABO and D blood grouping; several factors may nonetheless cause discrepancies in ABO/D typing with a fully automated system. It is suggested that standardization of sample collection may improve the performance of the fully automated system.

  9. Using Automated Scores of Student Essays to Support Teacher Guidance in Classroom Inquiry

    ERIC Educational Resources Information Center

    Gerard, Libby F.; Linn, Marcia C.

    2016-01-01

    Computer scoring of student written essays about an inquiry topic can be used to diagnose student progress both to alert teachers to struggling students and to generate automated guidance. We identify promising ways for teachers to add value to automated guidance to improve student learning. Three teachers from two schools and their 386 students…

  10. FY16 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Shemon, E. R.; Smith, M. A.

    2016-09-30

    The goal of the NEAMS neutronics effort is to develop a neutronics toolkit for use on sodium-cooled fast reactors (SFRs) which can be extended to other reactor types. The neutronics toolkit includes the high-fidelity deterministic neutron transport code PROTEUS and many supporting tools such as the cross section generation code MC2-3, a cross section library generation code, alternative cross section generation tools, mesh generation and conversion utilities, and an automated regression test tool. The FY16 effort for NEAMS neutronics focused on supporting the release of the SHARP toolkit and existing and new users, continuing to develop PROTEUS functions necessary for performance improvement as well as the SHARP release, verifying PROTEUS against available existing benchmark problems, and developing new benchmark problems as needed. The FY16 research effort was focused on further updates of PROTEUS-SN and PROTEUS-MOCEX and cross section generation capabilities as needed.

  11. Low power arcjet thruster pulse ignition

    NASA Technical Reports Server (NTRS)

    Sarmiento, Charles J.; Gruber, Robert P.

    1987-01-01

    An investigation of the pulse ignition characteristics of a 1 kW class arcjet using an inductive energy storage pulse generator with a pulse width modulated power converter identified several thruster and pulse generator parameters that influence breakdown voltage including pulse generator rate of voltage rise. This work was conducted with an arcjet tested on hydrogen-nitrogen gas mixtures to simulate fully decomposed hydrazine. Over all ranges of thruster and pulser parameters investigated, the mean breakdown voltages varied from 1.4 to 2.7 kV. Ignition tests at elevated thruster temperatures under certain conditions revealed occasional breakdowns to thruster voltages higher than the power converter output voltage. These post breakdown discharges sometimes failed to transition to the lower voltage arc discharge mode and the thruster would not ignite. Under the same conditions, a transition to the arc mode would occur for a subsequent pulse and the thruster would ignite. An automated 11 600 cycle starting and transition to steady state test demonstrated ignition on the first pulse and required application of a second pulse only two times to initiate breakdown.

  12. Progress towards Continental River Dynamics modeling

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Wei; Zheng, Xing; Liu, Frank; Maidment, David; Hodges, Ben

    2017-04-01

    The high-resolution National Water Model (NWM), launched by U.S. National Oceanic and Atmospheric Administration (NOAA) in August 2016, has shown it is possible to provide real-time flow prediction in rivers and streams across the entire continental United States. The next step for continental-scale modeling is moving from reduced physics (e.g. Muskingum-Cunge) to full dynamic modeling with the Saint-Venant equations. The Simulation Program for River Networks (SPRNT) provides a computational approach for the Saint-Venant equations, but obtaining sufficient channel bathymetric data and hydraulic roughness is seen as a critical challenge. However, recent work has shown the Height Above Nearest Drainage (HAND) method can be applied with the National Elevation Dataset (NED) to provide automated estimation of effective channel bathymetry suitable for large-scale hydraulic simulations. The present work examines the use of SPRNT with the National Hydrography Dataset (NHD) and HAND-derived bathymetry for automated generation of rating curves that can be compared to existing data. The approach can, in theory, be applied to every stream reach in the NHD and thus provide flood guidance where none is available. To test this idea we generated 2000+ rating curves in two catchments in Texas and Alabama (USA). Field data from the USGS and flood records from an Austin, Texas flood in May 2015 were used as validation. Large-scale implementation of this idea requires addressing several critical difficulties associated with numerical instabilities, including ill-posed boundary conditions generated in automated model linkages and inconsistencies in the river geometry. A key to future progress is identifying efficient approaches to isolate numerical instability contributors in a large time-space varying solution. This research was supported in part by the National Science Foundation under grant number CCF-1331610.
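
    The automated rating-curve step can be sketched with Manning's equation applied to HAND-style effective channel geometry; the roughness, slope, and width below are placeholders rather than NHD/NED-derived values.

        # Build a stage-discharge rating curve with Manning's equation (sketch).
        import numpy as np

        n_manning = 0.05   # channel roughness, assumed
        slope = 0.001      # channel bed slope, assumed
        width = 30.0       # effective channel width (m), assumed from HAND

        for h in np.arange(0.5, 5.0, 0.5):        # stage (m)
            area = width * h                      # wetted area (rectangular proxy)
            perimeter = width + 2.0 * h           # wetted perimeter
            R = area / perimeter                  # hydraulic radius
            Q = area * R ** (2.0 / 3.0) * slope ** 0.5 / n_manning
            print(f"stage {h:.1f} m -> discharge {Q:7.1f} m^3/s")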

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.

    Sensor-based operation of autonomous robots in unstructured and/or outdoor environments has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. An approach, which we have named the "Fuzzy Behaviorist Approach" (FBA), is proposed in an attempt to remedy some of these difficulties. This approach is based on the representation of the system's uncertainties using Fuzzy Set Theory-based approximations and on the representation of the reasoning and control schemes as sets of elemental behaviors. Using the FBA, a formalism for rule base development and an automated generator of fuzzy rules have been developed. This automated system can automatically construct the set of membership functions corresponding to fuzzy behaviors, once these have been expressed in qualitative terms by the user. The system also checks for completeness of the rule base and for non-redundancy of the rules (which has traditionally been a major hurdle in rule base development). Two major conceptual features, the suppression and inhibition mechanisms which allow expressing a dominance between behaviors, are discussed in detail. Some experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments, using one of our autonomous test-bed robots as well as a real car in outdoor environments, are then reviewed and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using the "Fuzzy Behaviorist" concepts.
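
    The completeness check mentioned above, verifying that every combination of input membership labels is covered by at least one rule, reduces to a simple set comparison, as in this toy sketch (the labels and rules are invented, not the generator's actual behaviors).

        # Rule-base completeness check over a fuzzy input partition (toy example).
        from itertools import product

        labels = {"obstacle_dist": ["near", "far"],
                  "heading_err": ["left", "ok", "right"]}
        rules = {("near", "left"), ("near", "ok"), ("near", "right"),
                 ("far", "left"), ("far", "ok")}      # ("far", "right") is missing

        missing = [c for c in product(*labels.values()) if c not in rules]
        print("rule base complete" if not missing else f"uncovered inputs: {missing}")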

  14. Domain specific software architectures: Command and control

    NASA Technical Reports Server (NTRS)

    Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave

    1992-01-01

    GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.

  15. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security-critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past, security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs of security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
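
    In its simplest mutation-based form, fuzzing randomly corrupts valid inputs and watches the target for failures. The sketch below uses a toy parse() target with a planted defect; real fuzzers add coverage feedback, and the model-based variant described here generates inputs from protocol models instead.

        # Minimal mutation fuzzer against a toy parser with a planted defect.
        import random

        def parse(data: bytes):
            if len(data) > 4 and data[4] == 0xFF:   # the planted bug
                raise RuntimeError("crash")

        seed = b"HELLO WORLD"
        for i in range(50000):
            buf = bytearray(seed)
            buf[random.randrange(len(buf))] = random.randrange(256)  # flip one byte
            try:
                parse(bytes(buf))
            except RuntimeError:
                print(f"iteration {i}: crashing input {bytes(buf)!r}")
                break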

  16. Comparison of visual microscopic and computer-automated fluorescence detection of rabies virus neutralizing antibodies.

    PubMed

    Péharpré, D; Cliquet, F; Sagné, E; Renders, C; Costy, F; Aubert, M

    1999-07-01

    The rapid fluorescent focus inhibition test (RFFIT) and the fluorescent antibody virus neutralization test (FAVNT) are both diagnostic tests for determining levels of rabies neutralizing antibodies. An automated method for determining fluorescence has been implemented to reduce the work time required for fluorescent visual microscopic observations. The automated method offers several advantages over conventional visual observation, such as the ability to rapidly test many samples. The antibody titers obtained with automated techniques were similar to those obtained with both the RFFIT (n = 165, r = 0.93, P < 0.001) and the FAVNT (n = 52, r = 0.99, P < 0.001).

  17. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection methods, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I will describe recent advances in automated analyzers, reviewing their labeling antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.

  18. A novel automated alternating current biosusceptometry method to characterization of controlled-release magnetic floating tablets of metronidazole.

    PubMed

    Ferrari, Priscileila Colerato; dos Santos Grossklauss, Dany Bruno Borella; Alvarez, Matheus; Paixão, Fabiano Carlos; Andreis, Uilian; Crispim, Alexandre Giordano; de Castro, Ana Dóris; Evangelista, Raul Cesar; de Arruda Miranda, José Ricardo

    2014-08-01

    Alternating Current Biosusceptometry (ACB) is a magnetic method used to characterize drug delivery systems. This work presents a system composed of an automated ACB sensor that acquires magnetic images of floating tablets. The purpose of this study was to use the automated ACB system to characterize magnetic floating tablets for controlled drug delivery. Floating tablets were prepared with hydroxypropyl methylcellulose (HPMC) as hydrophilic gel material, sodium bicarbonate as gas-generating agent, and ferrite as magnetic marker. ACB was used to characterize the floating lag time and the tablet hydration rate by quantifying the magnetic area of the magnetic images. Besides buoyancy, the floating tablets were evaluated for weight uniformity, hardness, swelling, and in vitro drug release. The optimized tablets were prepared with equal amounts of HPMC and ferrite, began to float within 4 min, and maintained flotation for more than 24 h. All physical parameters lay within the pharmacopeial limits. Drug release at 24 h was about 40%. The ACB results showed that this study provides a new approach for the in vitro investigation of controlled-release dosage forms. Moreover, using the automated ACB it will also be possible to test these parameters in humans, allowing an in vitro-in vivo correlation (IVIVC) to be established.

  19. Towards Robot Scientists for autonomous scientific discovery

    PubMed Central

    2010-01-01

    We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two prototype Robot Scientists: Adam and Eve. Adam has recently proven the potential of such systems by identifying twelve genes responsible for catalysing specific reactions in the metabolic pathways of the yeast Saccharomyces cerevisiae. This work has been formally recorded in great detail using logic. We argue that the reporting of science needs to become fully formalised and that Robot Scientists can help achieve this. This will make scientific information more reproducible and reusable, and promote the integration of computers in scientific reasoning. We believe the greater automation of both the physical and intellectual aspects of scientific investigations to be essential to the future of science. Greater automation improves the accuracy and reliability of experiments, increases the pace of discovery and, in common with conventional laboratory automation, removes tedious and repetitive tasks from the human scientist. PMID:20119518

  20. Towards Robot Scientists for autonomous scientific discovery.

    PubMed

    Sparkes, Andrew; Aubrey, Wayne; Byrne, Emma; Clare, Amanda; Khan, Muhammed N; Liakata, Maria; Markham, Magdalena; Rowland, Jem; Soldatova, Larisa N; Whelan, Kenneth E; Young, Michael; King, Ross D

    2010-01-04

    We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two prototype Robot Scientists: Adam and Eve. Adam has recently proven the potential of such systems by identifying twelve genes responsible for catalysing specific reactions in the metabolic pathways of the yeast Saccharomyces cerevisiae. This work has been formally recorded in great detail using logic. We argue that the reporting of science needs to become fully formalised and that Robot Scientists can help achieve this. This will make scientific information more reproducible and reusable, and promote the integration of computers in scientific reasoning. We believe the greater automation of both the physical and intellectual aspects of scientific investigations to be essential to the future of science. Greater automation improves the accuracy and reliability of experiments, increases the pace of discovery and, in common with conventional laboratory automation, removes tedious and repetitive tasks from the human scientist.

  1. Simplified Automated Image Analysis for Detection and Phenotyping of Mycobacterium tuberculosis on Porous Supports by Monitoring Growing Microcolonies

    PubMed Central

    den Hertog, Alice L.; Visser, Dennis W.; Ingham, Colin J.; Fey, Frank H. A. G.; Klatser, Paul R.; Anthony, Richard M.

    2010-01-01

    Background: Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high-burden settings. Methods: Here we explore the growth of Mycobacterium tuberculosis microcolonies cultured on porous aluminium oxide (PAO) supports and imaged by automated digital microscopy. Repeated imaging during colony growth greatly simplifies “computer vision”, and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, also allows the media to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. Significance: Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation. PMID:20544033

  2. Automated smartphone audiometry: Validation of a word recognition test app.

    PubMed

    Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J

    2018-03-01

    Develop and validate an automated smartphone word recognition test. Cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
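    As a toy illustration of the two agreement statistics reported above (the share of ears within a clinically acceptable margin, and the linear correlation of scores), the sketch below computes both; all scores and the margin value are fabricated for illustration and are not the study's data.

```python
# Toy computation of percent-within-margin agreement and Pearson correlation
# between two word recognition scorers. Requires Python >= 3.10 for
# statistics.correlation.
import statistics

app_scores   = [88, 72, 64, 96, 80, 58, 90, 76]  # smartphone app, % words correct
booth_scores = [84, 70, 68, 96, 76, 60, 88, 80]  # audiologist gold standard

margin = 6.0  # hypothetical clinically acceptable margin, percentage points
within = sum(abs(a - b) <= margin for a, b in zip(app_scores, booth_scores))
r = statistics.correlation(app_scores, booth_scores)  # Pearson r

print(f"{100 * within / len(app_scores):.0f}% of ears within ±{margin:g} points")
print(f"linear correlation r = {r:.2f}")
```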

  3. Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design

    NASA Technical Reports Server (NTRS)

    Li, Wu; Robinson, Jay

    2016-01-01

    This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.

  4. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
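    The core computation being validated is simple enough to sketch: a mean organ dose is the average of a Monte Carlo dose map over a segmentation mask, and the reported errors are signed percent differences between automated and expert masks. The sketch below uses an invented toy dose map and assumes nothing about the paper's actual file formats or pipeline.

```python
# Illustrative sketch: mean organ dose from a dose map plus two segmentations,
# and the signed percent error of the automated estimate. Toy data only.
import numpy as np

def mean_organ_dose(dose_map: np.ndarray, organ_mask: np.ndarray) -> float:
    """Mean dose over the voxels labeled as belonging to one organ."""
    return float(dose_map[organ_mask].mean())

def relative_error(auto_dose: float, expert_dose: float) -> float:
    """Signed percent error of the automated estimate vs. the expert one."""
    return 100.0 * (auto_dose - expert_dose) / expert_dose

rng = np.random.default_rng(0)
dose = rng.gamma(shape=2.0, scale=5.0, size=(64, 64, 64))  # toy dose map (mGy)
expert = np.zeros(dose.shape, dtype=bool)
expert[20:40, 20:40, 20:40] = True
auto = np.zeros(dose.shape, dtype=bool)
auto[21:41, 20:40, 20:40] = True  # slightly shifted organ boundary

d_auto, d_expert = mean_organ_dose(dose, auto), mean_organ_dose(dose, expert)
print(f"expert {d_expert:.2f} mGy, auto {d_auto:.2f} mGy, "
      f"error {relative_error(d_auto, d_expert):+.1f}%")
```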

  5. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  6. Automated hexahedral mesh generation from biomedical image data: applications in limb prosthetics.

    PubMed

    Zachariah, S G; Sanders, J E; Turkiyyah, G M

    1996-06-01

    A general method to generate hexahedral meshes for finite element analysis of residual limbs and similar biomedical geometries is presented. The method utilizes skeleton-based subdivision of cross-sectional domains to produce simple subdomains in which structured meshes are easily generated. Application to a below-knee residual limb and external prosthetic socket is described. The residual limb was modeled as consisting of bones, soft tissue, and skin. The prosthetic socket model comprised a socket wall with an inner liner. The geometries of these structures were defined using axial cross-sectional contour data from X-ray computed tomography, optical scanning, and mechanical surface digitization. A tubular surface representation, using B-splines to define the directrix and generator, is shown to be convenient for definition of the structure geometries. Conversion of cross-sectional data to the compact tubular surface representation is direct, and the analytical representation simplifies geometric querying and numerical optimization within the mesh generation algorithms. The element meshes remain geometrically accurate since boundary nodes are constrained to lie on the tubular surfaces. Several element meshes of increasing mesh density were generated for two residual limbs and prosthetic sockets. Convergence testing demonstrated that approximately 19 elements are required along a circumference of the residual limb surface for a simple linear elastic model. A model with the fibula absent compared with the same geometry with the fibula present showed differences suggesting higher distal stresses in the absence of the fibula. Automated hexahedral mesh generation algorithms for sliced data represent an advancement in prosthetic stress analysis since they allow rapid modeling of any given residual limb and optimization of mesh parameters.

  7. Using Pareto points for model identification in predictive toxicology

    PubMed Central

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
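    A hedged sketch of the Pareto idea follows: each candidate model is scored on two criteria (here, hypothetical estimates of overall accuracy and of applicability to the query compound), and only models not dominated on both criteria are retained. The model names and scores are invented.

```python
# Sketch of Pareto-based model identification over a model collection.
# Both criteria are "higher is better"; names and scores are hypothetical.
from typing import List, Tuple

Model = Tuple[str, float, float]  # (name, accuracy, applicability)

def pareto_front(models: List[Model]) -> List[Model]:
    """Keep models not dominated on both criteria by any other model."""
    front = []
    for m in models:
        dominated = any(
            o[1] >= m[1] and o[2] >= m[2] and (o[1] > m[1] or o[2] > m[2])
            for o in models
        )
        if not dominated:
            front.append(m)
    return front

candidates = [
    ("rf_logp_v1",  0.81, 0.40),
    ("svm_logp_v2", 0.78, 0.75),  # less accurate overall, more applicable here
    ("knn_logp_v3", 0.70, 0.55),  # dominated by svm_logp_v2
]
print(pareto_front(candidates))   # rf_logp_v1 and svm_logp_v2 remain
```

    A final tie-break among the surviving non-dominated models (for instance, preferring applicability to the query compound) would then yield the single recommended model.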

  8. Summary of flat-plate solar array project documentation. Abstracts of published documents, 1975 to June 1982

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Technologies that will enable the private sector to manufacture and widely use photovoltaic systems for the generation of electricity in residential, commercial, industrial, and government applications, at a cost per watt that is competitive with other means, are investigated. Silicon refinement processes, advanced silicon sheet growth techniques, solar cell development, encapsulation, automated fabrication process technology, advanced module/array design, and module/array test and evaluation techniques are developed.

  9. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  10. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  11. Development of a High-Fidelity Simulation Environment for Shadow-Mode Assessments of Air Traffic Concepts

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Robinson, John E.; Lai, Chok Fung

    2017-01-01

    This paper will describe the purpose, architecture, and implementation of a gate-to-gate, high-fidelity air traffic simulation environment called the Shadow Mode Assessment using Realistic Technologies for the National Airspace System (SMART-NAS) Test Bed. The overarching purpose of the SMART-NAS Test Bed (SNTB) is to conduct high-fidelity, real-time, human-in-the-loop and automation-in-the-loop simulations of current and proposed future air traffic concepts for the Next Generation Air Transportation System of the United States, called NextGen. SNTB is intended to enable simulations that are currently impractical or impossible for three major areas of NextGen research and development: Concepts across multiple operational domains such as the gate-to-gate trajectory-based operations concept; Concepts related to revolutionary operations such as the seamless and widespread integration of large and small Unmanned Aerial System (UAS) vehicles throughout U.S. airspace; Real-time system-wide safety assurance technologies to allow safe, increasingly autonomous aviation operations. SNTB is primarily accessed through a web browser. A set of secure support services are provided to simplify all aspects of real-time, human-in-the-loop and automation-in-the-loop simulations from design (i.e., prior to execution) through analysis (i.e., after execution). These services include simulation architecture and asset configuration; scenario generation; command, control and monitoring; and analysis support.

  12. Constructing Aligned Assessments Using Automated Test Construction

    ERIC Educational Resources Information Center

    Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui

    2013-01-01

    We describe an innovative automated test construction algorithm for building aligned achievement tests. By incorporating the algorithm into the test construction process, along with other test construction procedures for building reliable and unbiased assessments, the result is much more valid tests than result from current test construction…

  13. Simulation based optimization on automated fibre placement process

    NASA Astrophysics Data System (ADS)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (using Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data are taken into consideration prior to tool path generation to achieve a high success rate in manufacturing.

  14. Automated Test Systems for Toxic Vapor Detectors

    NASA Technical Reports Server (NTRS)

    Mattson, C. B.; Hammond, T. A.; Schwindt, C. J.

    1997-01-01

    The NASA Toxic Vapor Detection Laboratory (TVDL) at the Kennedy Space Center (KSC), Florida, has been using Personal Computer-based Data Acquisition and Control Systems (PCDAS) for about nine years. These systems control the generation of toxic vapors of known concentrations under controlled conditions of temperature and humidity. The PCDAS also logs the test conditions and the test article responses in data files for analysis by standard spreadsheets or custom programs. The PCDAS was originally developed to perform standardized qualification and acceptance tests in a search for a commercial off-the-shelf (COTS) toxic vapor detector to replace the hydrazine detectors for the Space Shuttle launch pad. It has since become standard test equipment for the TVDL and is indispensable in producing calibration standards for the new hydrazine monitors at the 10 parts per billion (ppb) level. The standard TVDL PCDAS can control two toxic vapor generators (TVG's) with three channels each and two flow/temperature/humidity (FTH) controllers; it can record data from up to six toxic vapor detectors (TVD's) under test and can deliver flows from 5 to 50 liters per minute (L/m) at temperatures from near zero to 50 degrees Celsius (C), using an environmental chamber to maintain the sample temperature. The concentration range for toxic vapors depends on the permeation source installed in the TVG. The PCDAS can provide closed-loop control of temperature and humidity to two sample vessels, typically one for zero gas and one for the standard gas. This is required at very low toxic vapor concentrations to minimize the time required to passivate the sample delivery system. Recently, there have been several requests for information about the PCDAS by other laboratories with similar needs, both on and off KSC. The purpose of this paper is to inform the toxic vapor detection community of the current status and planned upgrades to the automated testing of toxic vapor detectors at the Kennedy Space Center.

  15. Automated Test Systems for Toxic Vapor Detectors

    NASA Technical Reports Server (NTRS)

    Mattson, C. B.; Hammond, T. A.; Schwindt, C. J.

    1997-01-01

    The NASA Toxic Vapor Detection Laboratory (TVDL) at the Kennedy Space Center (KSC), Florida, has been using Personal Computer-based Data Acquisition and Control Systems (PCDAS) for about nine years. These systems control the generation of toxic vapors of known concentrations under controlled conditions of temperature and humidity. The PCDAS also logs the test conditions and the test article responses in data files for analysis by standard spreadsheets or custom programs. The PCDAS was originally developed to perform standardized qualification and acceptance tests in a search for a commercial off-the-shelf (COTS) toxic vapor detector to replace the hydrazine detectors for the Space Shuttle launch pad. It has since become standard test equipment for the TVDL and is indispensable in producing calibration standards for the new hydrazine monitors at the 10 parts per billion (ppb) level. The standard TVDL PCDAS can control two toxic vapor generators (TVG's) with three channels each and two flow/temperature/humidity (FTH) controllers; it can record data from up to six toxic vapor detectors (TVD's) under test and can deliver flows from 5 to 50 liters per minute (L/m) at temperatures from near zero to 50 degrees Celsius (C), using an environmental chamber to maintain the sample temperature. The concentration range for toxic vapors depends on the permeation source installed in the TVG. The PCDAS can provide closed-loop control of temperature and humidity to two sample vessels, typically one for zero gas and one for the standard gas. This is required at very low toxic vapor concentrations to minimize the time required to passivate the sample delivery system. Recently, there have been several requests for information about the PCDAS by other laboratories with similar needs, both on and off KSC. The purpose of this paper is to inform the toxic vapor detection community of the current status and planned upgrades to the automated testing of toxic vapor detectors at the Kennedy Space Center.

  16. Automation of steam generator services at public service electric & gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cruickshank, H.; Wray, J.; Scull, D.

    1995-03-01

    Public Service Electric & Gas takes an aggressive approach to pursuing new exposure reduction techniques. Evaluation of historic outage exposure shows that over the last eight refueling outages, primary steam generator work has averaged sixty-six (66) person-rem, or approximately twenty-five percent (25%) of the general outage exposure at Salem Station. This maintenance evolution represents the largest percentage of exposure for any single activity. Because of this, primary steam generator work represents an excellent opportunity for the development of significant exposure reduction techniques. A study of primary steam generator maintenance activities demonstrated that seventy-five percent (75%) of radiation exposure was due to work activities on the primary steam generator platform, and that development of automated methods for performing these activities was worth pursuing. Existing robotics systems were examined and it was found that a new approach would have to be developed. This resulted in a joint research and development project between Westinghouse and Public Service Electric & Gas to develop an automated system for accomplishing the Health Physics functions on the primary steam generator platform. R.O.M.M.R.S. (Remotely Operated Managed Maintenance Robotics System) was the result of this venture.

  17. Implementation and verification of global optimization benchmark problems

    NASA Astrophysics Data System (ADS)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification showed that the literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
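    The library itself is C++ and is not reproduced here; the toy Python sketch below only illustrates the underlying idea of a single expression description supporting three evaluations: a point value, a derivative, and a naive interval enclosure. It is restricted to one variable and the operators + and * for brevity.

```python
# One expression tree, three evaluations: value, derivative, interval enclosure.
class Var:                       # the single variable x
    def value(self, x): return x
    def deriv(self, x): return 1.0
    def interval(self, lo, hi): return (lo, hi)

class Const:
    def __init__(self, c): self.c = c
    def value(self, x): return self.c
    def deriv(self, x): return 0.0
    def interval(self, lo, hi): return (self.c, self.c)

class Add:
    def __init__(self, a, b): self.a, self.b = a, b
    def value(self, x): return self.a.value(x) + self.b.value(x)
    def deriv(self, x): return self.a.deriv(x) + self.b.deriv(x)
    def interval(self, lo, hi):
        (al, ah), (bl, bh) = self.a.interval(lo, hi), self.b.interval(lo, hi)
        return (al + bl, ah + bh)

class Mul:
    def __init__(self, a, b): self.a, self.b = a, b
    def value(self, x): return self.a.value(x) * self.b.value(x)
    def deriv(self, x):          # product rule
        return self.a.deriv(x) * self.b.value(x) + self.a.value(x) * self.b.deriv(x)
    def interval(self, lo, hi):  # naive interval product
        (al, ah), (bl, bh) = self.a.interval(lo, hi), self.b.interval(lo, hi)
        corners = [al * bl, al * bh, ah * bl, ah * bh]
        return (min(corners), max(corners))

# One description of f(x) = x*x + 3, three kinds of evaluation:
f = Add(Mul(Var(), Var()), Const(3.0))
print(f.value(2.0))            # 7.0
print(f.deriv(2.0))            # 4.0  (f'(x) = 2x)
print(f.interval(-1.0, 2.0))   # (1.0, 7.0): valid, if loose, enclosure of [3, 7]
```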

  18. Finding the ’RITE’ Acquisition Environment for Navy C2 Software

    DTIC Science & Technology

    2015-05-01

    Boilerplate contract language with Government-purpose rights; adding an expectation of quality to contracting language; template SOWs created. Tooling excerpt: Pr... (debugger); MCCABE IQ (static analysis: cyclomatic complexity and KSLOC; all languages); HP Fortify (security scan: STIG and vulnerabilities; Security & IA...); GSSAT (GOTS security scan: STIG and vulnerabilities); AutoIT (automated test scripting: engine for automating functional testing); TestComplete (automated test ...)

  19. Comparison study of membrane filtration direct count and an automated coliform and Escherichia coli detection system for on-site water quality testing.

    PubMed

    Habash, Marc; Johns, Robert

    2009-10-01

    This study compared an automated Escherichia coli and coliform detection system with the membrane filtration direct count technique for water testing. The automated instrument performed equal to or better than the membrane filtration test in analyzing E. coli-spiked samples and blind samples with interference from Proteus vulgaris or Aeromonas hydrophila.

  20. NASA - easyJet Collaboration on the Human Factors Monitoring Program (HFMP) Study

    NASA Technical Reports Server (NTRS)

    Srivistava, Ashok N.; Barton, Phil

    2012-01-01

    This is the first annual report jointly prepared by NASA and easyJet on the work performed under the agreement to collaborate on a study of the many factors entailed in flight- and cabin-crew fatigue and documenting the decreases in performance associated with fatigue. The objective of this Agreement is to generate reliable, automated procedures that improve understanding of the levels and characteristics of flight- and cabin-crew fatigue factors, both latent and proximate, whose confluence will likely result in unacceptable flight crew performance. This study entails the analyses of numerical and textual data collected during operational flights. NASA and easyJet are both interested in assessing and testing NASA's automated capabilities for extracting operationally significant information from very large, diverse (textual and numerical) databases, much larger than can be handled practically by human experts.

  1. Automated JPSS VIIRS GEO code change testing by using Chain Run Scripts

    NASA Astrophysics Data System (ADS)

    Chen, W.; Wang, W.; Zhao, Q.; Das, B.; Mikles, V. J.; Sprietzer, K.; Tsidulko, M.; Zhao, Y.; Dharmawardane, V.; Wolf, W.

    2015-12-01

    The Joint Polar Satellite System (JPSS) is the next-generation polar-orbiting operational environmental satellite system. The first satellite in the JPSS series of satellites, J1, is scheduled to launch in early 2017. J1 will carry similar versions of the instruments that are on board the Suomi National Polar-Orbiting Partnership (S-NPP) satellite, which was launched on October 28, 2011. The Center for Satellite Applications and Research Algorithm Integration Team (STAR AIT) uses the Algorithm Development Library (ADL) to run S-NPP and pre-J1 algorithms in a development and test mode. The ADL is an offline test system developed by Raytheon to mimic the operational system while enabling a development environment for plug-and-play algorithms. Perl Chain Run Scripts have been developed by STAR AIT to automate the staging and processing of multiple JPSS Sensor Data Record (SDR) and Environmental Data Record (EDR) products. The JPSS J1 VIIRS Day Night Band (DNB) has an anomalous non-linear response at high scan angles, based on prelaunch testing. The flight project has proposed multiple mitigation options through onboard aggregation, and Option 21 has been suggested by the VIIRS SDR team as the baseline aggregation mode. VIIRS GEOlocation (GEO) code analysis results show that the J1 DNB GEO product cannot be generated correctly without a software update. The modified code will support both Op21 and Op21/26 and is backward compatible with S-NPP. The J1 GEO code change version 0 delivery package is under development for the current change request. In this presentation, we will discuss how to use the Chain Run Scripts to verify the code change and Lookup Table (LUT) updates in ADL Block2.
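    The project's Chain Run Scripts are Perl and are not shown here; the Python sketch below only illustrates the general chain-run pattern of staging inputs, running processing stages in order, and diffing products against a baseline run to verify a code or LUT change. All stage names, paths, and the .h5 product extension are hypothetical.

```python
# Sketch of a chain-run regression check: run stages in order, then compare
# products to a baseline. Stage names and file layout are invented.
import filecmp
import subprocess
from pathlib import Path

STAGES = ["stage_inputs", "run_sdr", "run_geo", "run_edr"]  # hypothetical

def run_chain(workdir: Path) -> None:
    """Run each processing stage (assumed to be an executable in workdir)."""
    for stage in STAGES:
        subprocess.run([str(workdir / stage)], check=True)

def verify_against_baseline(products: Path, baseline: Path) -> list[str]:
    """Return the names of product files that differ from the baseline run."""
    changed = []
    for ref in baseline.glob("*.h5"):
        new = products / ref.name
        if not new.exists() or not filecmp.cmp(ref, new, shallow=False):
            changed.append(ref.name)
    return changed
```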

  2. SciBox, an end-to-end automated science planning and commanding system

    NASA Astrophysics Data System (ADS)

    Choo, Teck H.; Murchie, Scott L.; Bedini, Peter D.; Steele, R. Josh; Skura, Joseph P.; Nguyen, Lillian; Nair, Hari; Lucks, Michael; Berman, Alice F.; McGovern, James A.; Turner, F. Scott

    2014-01-01

    SciBox is a new technology for planning and commanding science operations for Earth-orbital and planetary space missions. It has been incrementally developed since 2001 and demonstrated on several spaceflight projects. The technology has matured to the point that it is now being used to plan and command all orbital science operations for the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission to Mercury. SciBox encompasses the derivation of observing sequences from science objectives, the scheduling of those sequences, the generation of spacecraft and instrument commands, and the validation of those commands prior to uploading to the spacecraft. Although the process is automated, science and observing requirements are incorporated at each step by a series of rules and parameters to optimize observing opportunities, which are tested and validated through simulation and review. Except for limited special operations and tests, there is no manual scheduling of observations or construction of command sequences. SciBox reduces the lead time for operations planning by shortening the time-consuming coordination process, reduces cost by automating the labor-intensive processes of human-in-the-loop adjudication of observing priorities, reduces operations risk by systematically checking constraints, and maximizes science return by fully evaluating the trade space of observing opportunities to meet MESSENGER science priorities within spacecraft recorder, downlink, scheduling, and orbital-geometry constraints.

  3. Automated Image Analysis of HER2 Fluorescence In Situ Hybridization to Refine Definitions of Genetic Heterogeneity in Breast Cancer Tissue

    PubMed Central

    Radziuviene, Gedmante; Rasmusson, Allan; Augulis, Renaldas; Lesciute-Krilaviciene, Daiva; Laurinaviciene, Aida; Clim, Eduard

    2017-01-01

    Human epidermal growth factor receptor 2 gene- (HER2-) targeted therapy for breast cancer relies primarily on HER2 overexpression established by immunohistochemistry (IHC) with borderline cases being further tested for amplification by fluorescence in situ hybridization (FISH). Manual interpretation of HER2 FISH is based on a limited number of cells and rather complex definitions of equivocal, polysomic, and genetically heterogeneous (GH) cases. Image analysis (IA) can extract high-capacity data and potentially improve HER2 testing in borderline cases. We investigated statistically derived indicators of HER2 heterogeneity in HER2 FISH data obtained by automated IA of 50 IHC borderline (2+) cases of invasive ductal breast carcinoma. Overall, IA significantly underestimated the conventional HER2, CEP17 counts, and HER2/CEP17 ratio; however, it collected more amplified cells in some cases below the lower limit of GH definition by manual procedure. Indicators for amplification, polysomy, and bimodality were extracted by factor analysis and allowed clustering of the tumors into amplified, nonamplified, and equivocal/polysomy categories. The bimodality indicator provided independent cell diversity characteristics for all clusters. Tumors classified as bimodal only partially coincided with the conventional GH heterogeneity category. We conclude that automated high-capacity nonselective tumor cell assay can generate evidence-based HER2 intratumor heterogeneity indicators to refine GH definitions. PMID:28752092

  4. Automated Image Analysis of HER2 Fluorescence In Situ Hybridization to Refine Definitions of Genetic Heterogeneity in Breast Cancer Tissue.

    PubMed

    Radziuviene, Gedmante; Rasmusson, Allan; Augulis, Renaldas; Lesciute-Krilaviciene, Daiva; Laurinaviciene, Aida; Clim, Eduard; Laurinavicius, Arvydas

    2017-01-01

    Human epidermal growth factor receptor 2 gene- (HER2-) targeted therapy for breast cancer relies primarily on HER2 overexpression established by immunohistochemistry (IHC) with borderline cases being further tested for amplification by fluorescence in situ hybridization (FISH). Manual interpretation of HER2 FISH is based on a limited number of cells and rather complex definitions of equivocal, polysomic, and genetically heterogeneous (GH) cases. Image analysis (IA) can extract high-capacity data and potentially improve HER2 testing in borderline cases. We investigated statistically derived indicators of HER2 heterogeneity in HER2 FISH data obtained by automated IA of 50 IHC borderline (2+) cases of invasive ductal breast carcinoma. Overall, IA significantly underestimated the conventional HER2, CEP17 counts, and HER2/CEP17 ratio; however, it collected more amplified cells in some cases below the lower limit of GH definition by manual procedure. Indicators for amplification, polysomy, and bimodality were extracted by factor analysis and allowed clustering of the tumors into amplified, nonamplified, and equivocal/polysomy categories. The bimodality indicator provided independent cell diversity characteristics for all clusters. Tumors classified as bimodal only partially coincided with the conventional GH heterogeneity category. We conclude that automated high-capacity nonselective tumor cell assay can generate evidence-based HER2 intratumor heterogeneity indicators to refine GH definitions.

  5. A Fully-Automated Subcortical and Ventricular Shape Generation Pipeline Preserving Smoothness and Anatomical Topology

    PubMed Central

    Tang, Xiaoying; Luo, Yuan; Chen, Zhibin; Huang, Nianwei; Johnson, Hans J.; Paulsen, Jane S.; Miller, Michael I.

    2018-01-01

    In this paper, we present a fully-automated subcortical and ventricular shape generation pipeline that acts on structural magnetic resonance images (MRIs) of the human brain. Principally, the proposed pipeline consists of three steps: (1) automated structure segmentation using the diffeomorphic multi-atlas likelihood-fusion algorithm; (2) study-specific shape template creation based on the Delaunay triangulation; (3) deformation-based shape filtering using the large deformation diffeomorphic metric mapping for surfaces. The proposed pipeline is shown to provide high accuracy, sufficient smoothness, and accurate anatomical topology. Two datasets focused upon Huntington's disease (HD) were used for evaluating the performance of the proposed pipeline. The first of these contains a total of 16 MRI scans, each with a gold standard available, on which the proposed pipeline's outputs were observed to be highly accurate and smooth when compared with the gold standard. Visual examinations and outlier analyses on the second dataset, which contains a total of 1,445 MRI scans, revealed 100% success rates for the putamen, the thalamus, the globus pallidus, the amygdala, and the lateral ventricle in both hemispheres and rates no smaller than 97% for the bilateral hippocampus and caudate. Another independent dataset, consisting of 15 atlas images and 20 testing images, was also used to quantitatively evaluate the proposed pipeline, with high accuracy having been obtained. In short, the proposed pipeline is herein demonstrated to be effective, both quantitatively and qualitatively, using a large collection of MRI scans. PMID:29867332

  6. A Fully-Automated Subcortical and Ventricular Shape Generation Pipeline Preserving Smoothness and Anatomical Topology.

    PubMed

    Tang, Xiaoying; Luo, Yuan; Chen, Zhibin; Huang, Nianwei; Johnson, Hans J; Paulsen, Jane S; Miller, Michael I

    2018-01-01

    In this paper, we present a fully-automated subcortical and ventricular shape generation pipeline that acts on structural magnetic resonance images (MRIs) of the human brain. Principally, the proposed pipeline consists of three steps: (1) automated structure segmentation using the diffeomorphic multi-atlas likelihood-fusion algorithm; (2) study-specific shape template creation based on the Delaunay triangulation; (3) deformation-based shape filtering using the large deformation diffeomorphic metric mapping for surfaces. The proposed pipeline is shown to provide high accuracy, sufficient smoothness, and accurate anatomical topology. Two datasets focused upon Huntington's disease (HD) were used for evaluating the performance of the proposed pipeline. The first of these contains a total of 16 MRI scans, each with a gold standard available, on which the proposed pipeline's outputs were observed to be highly accurate and smooth when compared with the gold standard. Visual examinations and outlier analyses on the second dataset, which contains a total of 1,445 MRI scans, revealed 100% success rates for the putamen, the thalamus, the globus pallidus, the amygdala, and the lateral ventricle in both hemispheres and rates no smaller than 97% for the bilateral hippocampus and caudate. Another independent dataset, consisting of 15 atlas images and 20 testing images, was also used to quantitatively evaluate the proposed pipeline, with high accuracy having been obtained. In short, the proposed pipeline is herein demonstrated to be effective, both quantitatively and qualitatively, using a large collection of MRI scans.

  7. Implementation of Testing Equipment for Asphalt Materials : Tech Summary

    DOT National Transportation Integrated Search

    2009-05-01

    Three new automated methods for related asphalt material and mixture testing were evaluated under this study. Each of these devices is designed to reduce testing time considerably and reduce operator error by automating the testing process. The Thery...

  8. Implementation of testing equipment for asphalt materials : tech summary.

    DOT National Transportation Integrated Search

    2009-05-01

    Three new automated methods for related asphalt material and mixture testing were evaluated under this study. Each of these devices is designed to reduce testing time considerably and reduce operator error by automating the testing process. The T...

  9. Cost-effectiveness analysis of the optimal threshold of an automated immunochemical test for colorectal cancer screening: performances of immunochemical colorectal cancer screening.

    PubMed

    Berchi, Célia; Guittet, Lydia; Bouvier, Véronique; Launoy, Guy

    2010-01-01

    Most industrialized countries, including France, have undertaken to generalize colorectal cancer screening using guaiac fecal occult blood tests (G-FOBT). However, recent research demonstrates that immunochemical fecal occult blood tests (I-FOBT) are more effective than G-FOBT. Moreover, new-generation I-FOBTs benefit from a quantitative reading technique that allows the positivity threshold to be chosen, hence offering the best balance between effectiveness and cost. We aimed to compare the cost and the clinical performance of one round of screening using I-FOBT at different positivity thresholds with those obtained with G-FOBT, to determine the optimal cut-off for I-FOBT. Data were derived from an experiment conducted from June 2004 to December 2005 in Calvados (France), where 20,322 inhabitants aged 50-74 years performed both I-FOBT and G-FOBT. Clinical performance was assessed by the number of advanced tumors screened, including large adenomas and cancers. Costs were assessed by the French Social Security Board and included only direct costs. Screening using I-FOBT resulted in better health outcomes and lower costs than screening using G-FOBT for thresholds between 75 and 93 ng/ml. I-FOBT at 55 ng/ml also offers a satisfactory alternative to G-FOBT, because it is 1.8-fold more effective than G-FOBT, without increasing the number of unnecessary colonoscopies, and at an extra cost of 2,519 euros per advanced tumor screened. The use of an automated I-FOBT at 75 ng/ml would guarantee more efficient screening than the currently used G-FOBT. Health authorities in industrialized countries should consider the replacement of G-FOBT by an automated I-FOBT test in the near future.
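    The threshold trade-off can be illustrated with a small scan over candidate cutoffs. In the sketch below, each cutoff is compared with the G-FOBT comparator on total cost and advanced tumors detected; cutoffs that find at least as many tumors for less money dominate, and otherwise an incremental cost per extra tumor is reported. All numbers are invented and are not the study's data.

```python
# Scan candidate I-FOBT positivity cutoffs against a G-FOBT comparator.
# Costs and tumor counts below are fabricated for illustration only.
GFOBT = {"cost": 500_000.0, "tumors": 100}

IFOBT_BY_CUTOFF = {  # ng/ml -> totals for one screening round (invented)
    55:  {"cost": 540_000.0, "tumors": 180},
    75:  {"cost": 495_000.0, "tumors": 160},
    93:  {"cost": 470_000.0, "tumors": 140},
    150: {"cost": 430_000.0, "tumors": 95},
}

for cutoff, r in sorted(IFOBT_BY_CUTOFF.items()):
    d_cost = r["cost"] - GFOBT["cost"]
    d_tumors = r["tumors"] - GFOBT["tumors"]
    if d_cost <= 0 and d_tumors >= 0:
        verdict = "dominates G-FOBT (cheaper, at least as effective)"
    elif d_tumors > 0:
        verdict = f"ICER {d_cost / d_tumors:,.0f} EUR per extra advanced tumor"
    else:
        verdict = "cheaper but less effective than G-FOBT"
    print(f"{cutoff:>3} ng/ml: {verdict}")
```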

  10. A modular, prospective, semi-automated drug safety monitoring system for use in a distributed data environment.

    PubMed

    Gagne, Joshua J; Wang, Shirley V; Rassen, Jeremy A; Schneeweiss, Sebastian

    2014-06-01

    The aim of this study was to develop and test a semi-automated process for conducting routine active safety monitoring for new drugs in a network of electronic healthcare databases. We built a modular program that semi-automatically performs cohort identification, confounding adjustment, diagnostic checks, aggregation and effect estimation across multiple databases, and application of a sequential alerting algorithm. During beta-testing, we applied the system to five databases to evaluate nine examples emulating prospective monitoring with retrospective data (five pairs for which we expected signals, two negative controls, and two examples for which it was uncertain whether a signal would be expected): cerivastatin versus atorvastatin and rhabdomyolysis; paroxetine versus tricyclic antidepressants and gastrointestinal bleed; lisinopril versus angiotensin receptor blockers and angioedema; ciprofloxacin versus macrolide antibiotics and Achilles tendon rupture; rofecoxib versus non-selective non-steroidal anti-inflammatory drugs (ns-NSAIDs) and myocardial infarction; telithromycin versus azithromycin and hepatotoxicity; rosuvastatin versus atorvastatin and diabetes and rhabdomyolysis; and celecoxib versus ns-NSAIDs and myocardial infarction. We describe the program, the necessary inputs, and the assumed data environment. In beta-testing, the system generated four alerts, all among positive control examples (i.e., lisinopril and angioedema; rofecoxib and myocardial infarction; ciprofloxacin and tendon rupture; and cerivastatin and rhabdomyolysis). Sequential effect estimates for each example were consistent in direction and magnitude with existing literature. Beta-testing across nine drug-outcome examples demonstrated the feasibility of the proposed semi-automated prospective monitoring approach. In retrospective assessments, the system identified an increased risk of myocardial infarction with rofecoxib and an increased risk of rhabdomyolysis with cerivastatin years before these drugs were withdrawn from the market. Copyright © 2014 John Wiley & Sons, Ltd.
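    The paper's actual alerting algorithm is not described in the abstract; the sketch below is a deliberately simplified stand-in that illustrates sequential monitoring: after each look, the cumulative incidence rate ratio between exposure groups is compared against a fixed signaling threshold once a minimum event count is reached. Real systems spread the type I error across looks, for example with a sequential probability ratio test.

```python
# Simplified sequential monitoring: alert when the cumulative rate ratio
# crosses a fixed threshold. Thresholds, counts, and person-time are invented.
from dataclasses import dataclass

@dataclass
class Look:                 # one monitoring period, aggregated across databases
    events_new: int         # outcome events among new-drug users
    persontime_new: float
    events_ref: int         # events among comparator-drug users
    persontime_ref: float

def monitor(looks, threshold=2.0, min_events=5):
    e_n = pt_n = e_r = pt_r = 0.0
    for i, lk in enumerate(looks, start=1):
        e_n += lk.events_new
        pt_n += lk.persontime_new
        e_r += lk.events_ref
        pt_r += lk.persontime_ref
        if e_n >= min_events and e_r > 0:
            rr = (e_n / pt_n) / (e_r / pt_r)  # cumulative incidence rate ratio
            if rr >= threshold:
                return i, rr                   # alert at this look
    return None

looks = [Look(2, 1000, 1, 1100), Look(4, 1200, 1, 1150), Look(6, 1300, 2, 1200)]
print(monitor(looks))  # -> (2, ~3.07): enough events accumulate at look 2
```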

  11. DAME: planetary-prototype drilling automation.

    PubMed

    Glass, B; Cannon, H; Branson, M; Hanagud, S; Paulsen, G

    2008-06-01

    We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.

  12. DAME: Planetary-Prototype Drilling Automation

    NASA Astrophysics Data System (ADS)

    Glass, B.; Cannon, H.; Branson, M.; Hanagud, S.; Paulsen, G.

    2008-06-01

    We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.

  13. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    Hagerty, J. J.

    1981-01-01

    Progress in the development of automated solar cell and module production is reported. The Unimate robot is programmed for the final 35-cell pattern to be used in the fabrication of the deliverable modules. The mechanical construction of the automated lamination station and final assembly station is completed, and the first operational testing is underway. The final controlling program is written and optimized. The glass reinforced concrete (GRC) panels to be used for testing and deliverables are in production. Test routines are grouped together and defined to produce the final control program.

  14. Cassini-Huygens maneuver automation for navigation

    NASA Technical Reports Server (NTRS)

    Goodson, Troy; Attiyah, Amy; Buffington, Brent; Hahn, Yungsun; Pojman, Joan; Stavert, Bob; Strange, Nathan; Stumpf, Paul; Wagner, Sean; Wolff, Peter

    2006-01-01

    Many times during the Cassini-Huygens mission to Saturn, propulsive maneuvers must be spaced so closely together that there isn't enough time or workforce to execute the maneuver-related software manually, one subsystem at a time. Automation is required. Automating the maneuver design process has involved close cooperation between teams. We present the contribution from the Navigation system. In scope, this includes trajectory propagation and search, generation of ephemerides, general tasks such as email notification and file transfer, and presentation materials. The software has been used to help understand maneuver optimization results, Huygens probe delivery statistics, and Saturn ring-plane crossing geometry. The Maneuver Automation Software (MAS), developed for the Cassini-Huygens program, enables frequent maneuvers by handling mundane tasks such as creation of deliverable files, file delivery, generation and transmission of email announcements, and generation of presentation material and other supporting documentation. By hand, these tasks took up hours, if not days, of work for each maneuver. Automated, these tasks may be completed in under an hour. During the cruise trajectory, the spacing of maneuvers was such that development of a maneuver design could span about a month, involving several other processes in addition to that described above. Often, about the last five days of this process covered the generation of a final design using an updated orbit-determination estimate. To support the tour trajectory, the orbit-determination data cut-off of five days before the maneuver needed to be reduced to approximately one day, and the whole maneuver development process needed to be reduced to less than a week.

  15. 75 FR 6770 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-10

    ... interface with AUTOM via an Exchange approved proprietary electronic quoting device in eligible options to... to generate and submit option quotations electronically through AUTOM in eligible options to which...

  16. Xenon International Automated Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-08-05

    The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.

  17. Titanium(IV) isopropoxide mediated solution phase reductive amination on an automated platform: application in the generation of urea and amide libraries.

    PubMed

    Bhattacharyya, S; Fan, L; Vo, L; Labadie, J

    2000-04-01

    Amine libraries and their derivatives are important targets for high throughput synthesis because of their versatility as medicinal agents and agrochemicals. As a part of our efforts towards automated chemical library synthesis, a titanium(IV) isopropoxide mediated solution phase reductive amination protocol was successfully translated to automation on the Trident(TM) library synthesizer of Argonaut Technologies. An array of 24 secondary amines was prepared in high yield and purity from 4 primary amines and 6 carbonyl compounds. These secondary amines were further utilized in a split synthesis to generate libraries of ureas, amides and sulfonamides in solution phase on the Trident(TM). The automated runs included 192 reactions to synthesize 96 ureas in duplicate and 96 reactions to synthesize 48 amides and 48 sulfonamides. A number of polymer-assisted solution phase protocols were employed for parallel work-up and purification of the products in each step.

  18. Cognitive anchoring on self-generated decisions reduces operator reliance on automated diagnostic aids.

    PubMed

    Madhavan, Poornima; Wiegmann, Douglas A

    2005-01-01

    Automation users often disagree with diagnostic aids that are imperfectly reliable. The extent to which users' agreements with an aid are anchored to their personal, self-generated diagnoses was explored. Participants (N = 75) performed 200 trials in which they diagnosed pump failures using an imperfectly reliable automated aid. One group (nonforced anchor, n = 50) provided diagnoses only after consulting the aid. Another group (forced anchor, n = 25) provided diagnoses both before and after receiving feedback from the aid. Within the nonforced anchor group, participants' self-reported tendency to prediagnose system failures significantly predicted their tendency to disagree with the aid, revealing a cognitive anchoring effect. Agreement rates of participants in the forced anchor group indicated that public commitment to a diagnosis did not strengthen this effect. Potential applications include the development of methods for reducing cognitive anchoring effects and improving automation utilization in high-risk domains.

  19. 46 CFR 61.40-3 - Design verification testing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-3 Design verification testing. (a) Tests must verify that automated vital systems are designed, constructed, and operate in...

  20. Test/score/report: Simulation techniques for automating the test process

    NASA Technical Reports Server (NTRS)

    Hageman, Barbara H.; Sigman, Clayton B.; Koslosky, John T.

    1994-01-01

    A Test/Score/Report capability is currently being developed for the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) system, which will automate testing of the Goddard Space Flight Center (GSFC) Payload Operations Control Center (POCC) and Mission Operations Center (MOC) software in three areas: telemetry decommutation, spacecraft command processing, and spacecraft memory load and dump processing. Automated computer control of the acceptance test process is one of the primary goals of a test team. With the proper simulation tools and user interface, acceptance testing, regression testing, and repeating specific test procedures of a ground data system become simpler tasks. Ideally, the goal for complete automation would be to plug the operational deliverable into the simulator, press the start button, execute the test procedure, accumulate and analyze the data, score the results, and report the results, along with a go/no-go recommendation, to the test team. In practice, this may not be possible because of inadequate test tools, pressures of schedules, limited resources, etc. Most tests are accomplished using some degree of automation together with labor-intensive test procedures. This paper discusses some simulation techniques that can improve the automation of the test process. The TASS system tests the POCC/MOC software and provides a score based on the test results. The TASS system displays statistics on the success of the POCC/MOC system processing in each of the three areas, as well as event messages pertaining to the Test/Score/Report processing. The TASS system also provides formatted reports documenting each step performed during the tests and the results of each step. A prototype of the Test/Score/Report capability is available and currently being used to test some POCC/MOC software deliveries. When this capability is fully operational, it should greatly reduce the time necessary to test a POCC/MOC software delivery, as well as improve the quality of the test process.
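
    The plug-in, press-start ideal described above is essentially a scripted execute/score/report loop. Below is a minimal Python sketch of that loop, not TASS code; the step names and the pass threshold are invented for illustration.

        def run_suite(steps, go_threshold=0.95):
            """Execute test steps, score the results, and report a go/no-go."""
            results = []
            for name, step in steps:
                try:
                    step()                      # execute the test procedure
                    results.append((name, True))
                except AssertionError:
                    results.append((name, False))
            score = sum(ok for _, ok in results) / len(results)   # score the results
            for name, ok in results:                              # report each step
                print(f"{name}: {'PASS' if ok else 'FAIL'}")
            print(f"score={score:.2f} -> {'GO' if score >= go_threshold else 'NO-GO'}")

        # Hypothetical steps standing in for telemetry, command, and memory-load tests:
        run_suite([
            ("telemetry_decom", lambda: None),
            ("command_processing", lambda: None),
            ("memory_load_dump", lambda: None),
        ])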

  1. IceVal DatAssistant: An Interactive, Automated Icing Data Management System

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Wright, William B.

    2008-01-01

    As with any scientific endeavor, the foundation of icing research at the NASA Glenn Research Center (GRC) is the data acquired during experimental testing. In the case of the GRC Icing Branch, an important part of this data consists of ice tracings taken following tests carried out in the GRC Icing Research Tunnel (IRT), as well as the associated operational and environmental conditions documented during these tests. Over the years, the large number of experimental runs completed has served to emphasize the need for a consistent strategy for managing this data. To address the situation, the Icing Branch has recently elected to implement the IceVal DatAssistant automated data management system. With the release of this system, all publicly available IRT-generated experimental ice shapes with complete and verifiable conditions have now been compiled into one electronically-searchable database. Simulation software results for the equivalent conditions, generated using the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the corresponding experimental runs. In addition to this comprehensive database, the IceVal system also includes a graphically-oriented database access utility, which provides reliable and easy access to all data contained in the database. In this paper, the issues surrounding historical icing data management practices are discussed, as well as the anticipated benefits to be achieved as a result of migrating to the new system. A detailed description of the software system features and database content is also provided; and, finally, known issues and plans for future work are presented.

  3. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full-suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  4. Space station automation of common module power management and distribution

    NASA Technical Reports Server (NTRS)

    Miller, W.; Jones, E.; Ashworth, B.; Riedesel, J.; Myers, C.; Freeman, K.; Steele, D.; Palmer, R.; Walsh, R.; Gohring, J.

    1989-01-01

    The purpose is to automate a breadboard-level Power Management and Distribution (PMAD) system which possesses many functional characteristics of a specified Space Station power system. The automation system was built upon a 20 kHz ac source with redundancy of the power buses. There are two power distribution control units which furnish power to six load centers, which in turn enable load circuits based upon a system-generated schedule. The progress in building this specified autonomous system is described. Automation of Space Station Module PMAD was accomplished by segmenting the complete task into the following four independent tasks: (1) develop a detailed approach for PMAD automation; (2) define the software and hardware elements of automation; (3) develop the automation system for the PMAD breadboard; and (4) select an appropriate host processing environment.
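
    Enabling load circuits from a system-generated schedule is, at its core, a timed table lookup. The sketch below illustrates the idea in Python; the load names, times, and schedule format are invented, not the breadboard's actual control software.

        # Schedule: activation time (minutes into the plan) -> load circuits to enable.
        schedule = {
            0:  ["thermal_pump", "lights_A"],
            15: ["experiment_1"],
            45: ["experiment_2", "heater_B"],
        }

        def loads_enabled_at(minute):
            """Return every load circuit whose scheduled start time has passed."""
            enabled = []
            for start, loads in sorted(schedule.items()):
                if start <= minute:
                    enabled.extend(loads)
            return enabled

        print(loads_enabled_at(20))  # ['thermal_pump', 'lights_A', 'experiment_1']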

  5. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    NASA Astrophysics Data System (ADS)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for fully-automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof, and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions complementing it with magnetoresistance and spin-transfer-torque calculations, as well as local magnetization data selection for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.
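
    Automated Mif generation of this kind amounts to filling a parameterized template once per sweep point. The Python sketch below illustrates that pattern; the header and parameter line are abbreviated stand-ins, not MAGE's actual output format.

        from textwrap import dedent

        # Illustrative, abbreviated Mif template (not MAGE's actual output format).
        TEMPLATE = dedent("""\
            # MIF 2.1
            Parameter applied_field_mT {field_mT}
            # ... atlas, mesh, evolver, and driver blocks would follow ...
        """)

        def write_sweep(fields_mT):
            """Write one Mif file per value of the swept applied field."""
            for field in fields_mT:
                name = f"sweep_{field}mT.mif"
                with open(name, "w") as f:
                    f.write(TEMPLATE.format(field_mT=field))
                print("wrote", name)

        write_sweep([0, 50, 100, 150])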

  6. Exploring the Use of a Test Automation Framework

    NASA Technical Reports Server (NTRS)

    Cervantes, Alex

    2009-01-01

    It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the time period allotted to them. When problems occur in the implementation phase of a development project, the software delivery date normally slides. As a result, testers either need to work longer hours, or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.

  7. Experimental Evaluation of an Integrated Datalink and Automation-Based Strategic Trajectory Concept

    NASA Technical Reports Server (NTRS)

    Mueller, Eric

    2007-01-01

    This paper presents research on the interoperability of trajectory-based automation concepts and technologies with the modern Flight Management Systems and datalink communication available on many of today's commercial aircraft. A tight integration of trajectory-based ground automation systems with the aircraft Flight Management System through datalink will enable mid-term and far-term benefits from trajectory-based automation methods. A two-way datalink connection between the trajectory-based automation resident in the Center/TRACON Automation System and the Future Air Navigation System-1 integrated FMS/datalink in the NASA Ames B747-400 Level D simulator has been established, and extensive simulation of the use of datalink messages to generate strategic trajectories has been completed. A strategic trajectory is defined as an aircraft deviation needed to solve a conflict or honor a route request and then merge the aircraft back to its nominal preferred trajectory using a single continuous trajectory clearance. Engineers on the ground side of the datalink generated lateral and vertical trajectory clearances and transmitted them to the Flight Management System of the 747; the airborne automation then flew the new trajectory without human intervention, requiring the flight crew only to review and to accept the trajectory. This simulation established the protocols needed for a significant majority of the trajectory change types required to solve a traffic conflict or deviate around weather. This demonstration provides a basis for understanding the requirements for integration of trajectory-based automation with current Flight Management Systems and datalink to support future National Airspace System operations.

  8. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation: Second Year Progress Report

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    Mesh generation has long been recognized as a bottleneck in the CFD process. While much research on automating the volume mesh generation process has been relatively successful, these methods rely on an appropriate initial surface triangulation to work properly. Surface discretization has been one of the least automated steps in computational simulation due to its dependence on implicitly defined CAD surfaces and curves. Differences in CAD geometry engines manifest themselves in discrepancies in their interpretation of the same entities. This lack of "good" geometry causes significant problems for mesh generators, requiring users to "repair" the CAD geometry before mesh generation. The problem is exacerbated when CAD geometry is translated to other forms (e.g., IGES), which do not include important topological and construction information in addition to entity geometry. One technique to avoid these problems is to access the CAD geometry directly from the mesh generating software, rather than through files. By accessing the geometry model (not a discretized version) in its native environment, this approach avoids translation to a format which can deplete the model of topological information. Our approach to enabling models developed in the Denali software environment to directly access CAD geometry and functions is through an Application Programming Interface (API) known as CAPRI. CAPRI provides a layer of indirection through which CAD-specific data may be accessed by an application program using CAD-system neutral C and FORTRAN language function calls. CAPRI supports a general set of CAD operations such as truth testing, geometry construction, and entity queries.
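
    The difference between direct model queries and file translation can be seen in a toy example. The Python sketch below uses invented classes (not the CAPRI API, whose bindings are C and Fortran) to show how a flat export discards the face-edge topology that direct queries preserve.

        # Toy model: direct queries keep topology (which edges bound which face),
        # while a flat, translation-style export reduces it to an entity soup.
        class Face:
            def __init__(self, fid, edge_ids):
                self.id, self.edge_ids = fid, edge_ids

        class Model:
            def __init__(self, faces):
                self.faces = faces
            def boundary_edges(self, fid):      # entity query with topology intact
                return next(f.edge_ids for f in self.faces if f.id == fid)
            def export_flat(self):              # export: face/edge links are lost
                return sorted({e for f in self.faces for e in f.edge_ids})

        m = Model([Face("F1", ["E1", "E2", "E3"]), Face("F2", ["E3", "E4", "E5"])])
        print(m.boundary_edges("F2"))   # ['E3', 'E4', 'E5'] -- still attached to F2
        print(m.export_flat())          # ['E1', ..., 'E5']  -- connectivity gone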

  9. Automated generation of lattice QCD Feynman rules

    NASA Astrophysics Data System (ADS)

    Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.

    2009-12-01

    The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to calculate the derivatives of Feynman diagrams as well.

    Program summary
    Program title: HiPPY, HPsrc
    Catalogue identifier: AEDX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPLv2 (see Additional comments below)
    No. of lines in distributed program, including test data, etc.: 513 426
    No. of bytes in distributed program, including test data, etc.: 4 893 707
    Distribution format: tar.gz
    Programming language: Python, Fortran95
    Computer: HiPPy: single-processor workstations. HPsrc: single-processor workstations and MPI-enabled multi-processor systems
    Operating system: HiPPy: any for which Python v2.5.x is available. HPsrc: any for which a standards-compliant Fortran95 compiler is available
    Has the code been vectorised or parallelised?: Yes
    RAM: Problem specific, typically less than 1 GB for either code
    Classification: 4.4, 11.5
    Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions.
    Solution method: An automated expansion method implemented in Python (HiPPy) and code to use the expansions to generate Feynman rules in Fortran95 (HPsrc).
    Restrictions: No general restrictions. Specific restrictions are discussed in the text.
    Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code, or of modifications of it, cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us.
    Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run.
    References:
    [1] A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026.
    [2] M. Lüscher, P. Weisz, Efficient Numerical Techniques for Perturbative Lattice Gauge Theory Computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.

  10. A study into the automation of cognitive assessment tasks for delivery via the telephone: lessons for developing remote monitoring applications for the elderly.

    PubMed

    D'Arcy, Shona; Rapcan, Viliam; Gali, Alessandra; Burke, Nicola; O'Connell, Gloria Crispino; Robertson, Ian H; Reilly, Richard B

    2013-01-01

    Cognitive assessments are valuable tools in assessing neurological conditions. They are critical in measuring deficits in cognitive function in an array of neurological disorders and during the ageing process. Automation of cognitive assessments is one way to address the increasing burden on medical resources for an ever increasing ageing population. This study investigated the suitability of using automated Interactive Voice Response (IVR) technology to deliver a suite of cognitive assessments to older adults using speech as the input modality. Several clinically valid and gold-standard cognitive assessments were selected for implementation in the IVR application. The IVR application was designed using human centred design principles to ensure the experience was as user friendly as possible. Sixty-one participants completed two IVR assessments and one face-to-face (FF) assessment with a neuropsychologist. Completion rates for individual tests were inspected to identify those tests that are most suitable for administration via IVR technology. Interclass correlations were calculated to assess the reliability of the automated administration of the cognitive assessments across delivery modes. While all participants successfully completed all automated assessments, variability in the completion rates for different cognitive tests was observed. Statistical analysis found significant interclass correlations for certain cognitive tests between the different modes of administration. Analysis also suggests that an initial FF assessment reduces the variability in cognitive test scores when introducing automation into such an assessment. This study has demonstrated the functional and cognitive reliability of administering specific cognitive tests using an automated, speech driven application. This study has defined the characteristics of existing cognitive tests that are suitable for such an automated delivery system and also informs on the limitations of other cognitive tests for this modality. This study presents recommendations for developing future large scale cognitive assessments.

  11. Library Research: A Ten Year Analysis of the Library Automation Marketplace: 1981-1990.

    ERIC Educational Resources Information Center

    Fivecoat, Martha H.

    This study focuses on the growth of the library automation market from 1981 to 1990. It draws on library automation data published annually in the Library Journal between 1981 and 1990. The data are used to examine: (1) the overall library system market trends based on the total and cumulative number of systems installed and revenue generated; (2)…

  12. Designing systems to satisfy their users - The coming changes in aviation weather and the development of a central weather processor

    NASA Technical Reports Server (NTRS)

    Bush, M. W.

    1984-01-01

    Attention is given to the development history of the Central Weather Processor (CWP) program of the Federal Aviation Administration. The CWP will interface with high speed digital communications links, accept data and information products from new sources, generate data processing products, and provide meteorologists with the capability to automate data retrieval and dissemination. The CWP's users are operational (air traffic controllers, meteorologists and pilots), institutional (logistics, maintenance, testing and evaluation personnel), and administrative.

  13. Managing computer-controlled operations

    NASA Technical Reports Server (NTRS)

    Plowden, J. B.

    1985-01-01

    A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.

  14. Rocket engine diagnostics using qualitative modeling techniques

    NASA Technical Reports Server (NTRS)

    Binder, Michael; Maul, William; Meyer, Claudia; Sovie, Amy

    1992-01-01

    Researchers at NASA Lewis Research Center are presently developing qualitative modeling techniques for automated rocket engine diagnostics. A qualitative model of a turbopump interpropellant seal system has been created. The qualitative model describes the effects of seal failures on the system steady-state behavior. This model is able to diagnose the failure of particular seals in the system based on anomalous temperature and pressure values. The anomalous values input to the qualitative model are generated using numerical simulations. Diagnostic test cases include both single and multiple seal failures.

  16. Knowledge requirements for automated inference of medical textbook markup.

    PubMed Central

    Berrios, D. C.; Kehler, A.; Fagan, L. M.

    1999-01-01

    Indexing medical text in journals or textbooks requires a tremendous amount of resources. We tested two algorithms for automatically indexing nouns, noun-modifiers, and noun phrases, and inferring selected binary relations between UMLS concepts in a textbook of infectious disease. Sixty-six percent of nouns and noun-modifiers and 81% of noun phrases were correctly matched to UMLS concepts. Semantic relations were identified with 100% specificity and 94% sensitivity. For some medical sub-domains, these algorithms could permit expeditious generation of more complex indexing. PMID:10566445

  17. Evaluation of three techniques for classifying urban land cover patterns using LANDSAT MSS data. [New Orleans, Louisiana

    NASA Technical Reports Server (NTRS)

    Baumann, P. R. (Principal Investigator)

    1979-01-01

    Three computer quantitative techniques for determining urban land cover patterns are evaluated. The techniques examined deal with the selection of training samples by an automated process, the overlaying of two scenes from different seasons of the year, and the use of individual pixels as training points. Evaluation is based on the number and type of land cover classes generated and the marks obtained from an accuracy test. New Orleans, Louisiana and its environs form the study area.

  18. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    PubMed

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires the use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and the R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, time-saving, yet accurate and powerful tool to analyze large RNA-seq datasets and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.

  19. Test-retest reliability of automated whole body and compartmental muscle volume measurements on a wide bore 3T MR system.

    PubMed

    Thomas, Marianna S; Newman, David; Leinhard, Olof Dahlqvist; Kasmai, Bahman; Greenwood, Richard; Malcolm, Paul N; Karlsson, Anette; Rosander, Johannes; Borga, Magnus; Toms, Andoni P

    2014-09-01

    To measure the test-retest reproducibility of an automated system for quantifying whole body and compartmental muscle volumes using wide bore 3 T MRI. Thirty volunteers stratified by body mass index underwent whole body 3 T MRI, with two-point Dixon sequences, on two separate occasions. Water-fat separation was performed, with automated segmentation of whole body, torso, and upper and lower leg volumes, and manually segmented lower leg muscle volumes. Mean automated total body muscle volume was 19.32 L (SD 9.1) and 19.28 L (SD 9.12) for the first and second acquisitions (intraclass correlation coefficient (ICC) = 1.0; 95% limits of agreement -0.32 to 0.2 L). ICCs for all automated test-retest muscle volumes were almost perfect (0.99-1.0), with 95% limits of agreement within 1.8-6.6% of mean volume. Automated muscle volume measurements correlate closely with manual quantification (right lower leg: manual 1.68 L (2SD 0.6) compared to automated 1.64 L (2SD 0.6); left lower leg: manual 1.69 L (2SD 0.64) compared to automated 1.63 L (SD 0.61); correlation coefficients for automated and manual segmentation were 0.94-0.96). Fully automated whole body and compartmental muscle volume quantification can be achieved rapidly on a 3 T wide bore system with very low margins of error, excellent test-retest reliability, and excellent correlation to manual segmentation in the lower leg. Sarcopaenia is an important reversible complication of a number of diseases. Manual quantification of muscle volume is time-consuming and expensive. Muscles can be imaged using in- and out-of-phase MRI. Automated atlas-based segmentation can identify muscle groups. Automated muscle volume segmentation is reproducible and can replace manual measurements.

  20. Model-Based GN and C Simulation and Flight Software Development for Orion Missions beyond LEO

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan; Milenkovic, Zoran; Henry, Joel; Buttacoli, Michael

    2014-01-01

    For Orion missions beyond low Earth orbit (LEO), the Guidance, Navigation, and Control (GN&C) system is being developed using a model-based approach for simulation and flight software. Lessons learned from the development of GN&C algorithms and flight software for the Orion Exploration Flight Test One (EFT-1) vehicle have been applied to the development of further capabilities for Orion GN&C beyond EFT-1. Continuing the use of a Model-Based Development (MBD) approach with the Matlab®/Simulink® tool suite, the process for GN&C development and analysis has been largely improved. Furthermore, a model-based simulation environment in Simulink, rather than an external C-based simulation, greatly eases the process for development of flight algorithms. The benefits seen by employing lessons learned from EFT-1 are described, as well as the approach for implementing additional MBD techniques. Also detailed are the key enablers for improvements to the MBD process, including enhanced configuration management techniques for model-based software systems, automated code and artifact generation, and automated testing and integration.

  1. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content

    PubMed Central

    Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao

    2016-01-01

    One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selection of doubled haploids. The conventional methods, based on the color of the endosperm and embryo of seeds, are slow, manual, and prone to error. On the other hand, there exists a significant difference between diploid and haploid seeds generated by a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system comprises a sampler unit that selects a single kernel and feeds it in for NMR and weight measurement, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described and directions for future improvement are discussed. PMID:27454427

  2. Automated eddy current inspection of Space Shuttle APU turbine wheel blades

    NASA Technical Reports Server (NTRS)

    Fisher, Jay L.; Rowland, Stephen N.; Stolte, Jeffrey S.; Salkowski, Charles

    1991-01-01

    An automated inspection system based on eddy current testing (ET) techniques has been developed to inspect turbine wheel blades on the APU used in NASA's Space Transportation System. The APU is a hydrazine-powered gas turbine with a 15-cm diameter Rene 41 turbine wheel, which has 123 first-stage blades and 123 second-stage blades. The flaw detection capability of the ET system is verified through comparison with fluorescent penetrant test results. Results of the comparison indicate that ET is capable of inspecting surfaces with very restrictive geometries. The ET capability requires development of probes with extremely small coils to allow inspection within 0.4 mm of the blade root and the leading and trailing edges of the blade and within a height restriction of less than 1 mm. The color 2D presentation of the ET data provided crack-growth pattern and length information similar to those found with visual techniques. It also provided visual clues to minimize geometry effects such as those generated by blade edges, a neighboring blade, and changes in the blade thickness.

  3. Automation of testing modules of controller ELSY-TMK

    NASA Astrophysics Data System (ADS)

    Dolotov, A. E.; Dolotova, R. G.; Petuhov, D. V.; Potapova, A. P.

    2017-01-01

    Modern automation tools make it possible to maintain high quality standards for released products and to raise labour efficiency. This paper presents data on automating the test process for the ELSY-TMK controller [1]. The ELSY-TMK programmable logic controller is an effective modular platform for building automation systems for small and medium-scale industrial production. The controller's modern, functional communication standard and open environment make it a powerful tool for a wide spectrum of industrial automation applications. The algorithm allows controller modules to be tested by operating the switching system and external devices faster, and with higher quality, than a human could without such means.

  4. Predicting Pilot Behavior in Medium Scale Scenarios Using Game Theory and Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Yildiz, Yildiray; Agogino, Adrian; Brat, Guillaume

    2013-01-01

    Effective automation is critical in achieving the capacity and safety goals of the Next Generation Air Traffic System. Unfortunately, creating integration and validation tools for such automation is difficult, as the interactions between automation and its human counterparts are complex and unpredictable. This validation becomes even more difficult as we integrate wide-reaching technologies that affect the behavior of different decision makers in the system, such as pilots, controllers, and airlines. While overt short-term behavior changes can be explicitly modeled with traditional agent modeling systems, subtle behavior changes caused by the integration of new technologies may snowball into larger problems and be very hard to detect. To overcome these obstacles, we show how the integration of new technologies can be validated by learning behavior models based on goals. In this framework, human participants are not modeled explicitly. Instead, their goals are modeled, and through reinforcement learning their actions are predicted. The main advantage of this approach is that modeling is done within the context of the entire system, allowing for accurate modeling of all participants as they interact as a whole. In addition, such an approach allows for efficient trade studies and feasibility testing on a wide range of automation scenarios. The goal of this paper is to test whether such an approach is feasible. To do this, we implement the approach using a simple discrete-state learning system on a scenario where 50 aircraft need to self-navigate using Automatic Dependent Surveillance-Broadcast (ADS-B) information. In this scenario, we show how the approach can be used to predict the ability of pilots to adequately balance aircraft separation and fly efficient paths. We present results with several levels of complexity and airspace congestion.
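
    The "simple discrete-state learning system" can be pictured as tabular Q-learning. The sketch below is a generic Python illustration on a toy goal-reaching task; the states, actions, and reward are invented stand-ins, not the paper's airspace model.

        import random
        from collections import defaultdict

        # Toy goal model: an agent on a line wants to reach position 4 (its "goal"),
        # mirroring how goals, not people, are modeled and actions then predicted.
        actions = [-1, +1]
        Q = defaultdict(lambda: {a: 0.0 for a in actions})
        alpha, gamma, epsilon = 0.5, 0.9, 0.1

        for episode in range(500):
            s = 0
            while s != 4:
                a = (random.choice(actions) if random.random() < epsilon
                     else max(Q[s], key=Q[s].get))
                s2 = max(-4, min(4, s + a))
                r = 1.0 if s2 == 4 else -0.01      # reward encodes the goal
                Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
                s = s2

        # Predicted action at each state after learning:
        print({s: max(Q[s], key=Q[s].get) for s in range(-3, 4)})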

  5. The terminal area automated path generation problem

    NASA Technical Reports Server (NTRS)

    Hsin, C.-C.

    1977-01-01

    The automated terminal area path generation problem in the advanced Air Traffic Control (ATC) system has been studied. Definitions, input, output, and the interrelationships with other ATC functions have been discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum-effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc. Recommendations are made on real-world implementations.
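
    Path generation on a given schedule is, in its simplest form, a root-finding problem: adjust a stretch parameter until the predicted arrival time matches the scheduled one. Below is a minimal Newton-Raphson sketch in Python with an invented arrival-time model; the real trajectory dynamics are far richer.

        def arrival_time(stretch):
            """Toy model: stretching the path delays arrival (seconds)."""
            return 600.0 + 45.0 * stretch + 2.0 * stretch ** 2

        def solve_stretch(target, x=0.0, tol=1e-6, h=1e-6):
            """Newton-Raphson on f(x) = arrival_time(x) - target, numeric derivative."""
            for _ in range(50):
                f = arrival_time(x) - target
                if abs(f) < tol:
                    return x
                dfdx = (arrival_time(x + h) - arrival_time(x)) / h
                x -= f / dfdx
            raise RuntimeError("did not converge")

        print(solve_stretch(700.0))  # stretch parameter that meets a 700 s delivery time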

  6. A New Real - Time Fault Detection Methodology for Systems Under Test. Phase 1

    NASA Technical Reports Server (NTRS)

    Johnson, Roger W.; Jayaram, Sanjay; Hull, Richard A.

    1998-01-01

    The purpose of this research is focused on the identification and demonstration of critical technology innovations that will be applied to various applications, viz., automated machine health monitoring, and real-time data analysis and control of Systems Under Test (SUT). This new innovation, using a High Fidelity Dynamic Model-based Simulation (HFDMS) approach, will be used to implement a real-time monitoring, Test and Evaluation (T&E) methodology that includes the transient behavior of the system under test. The unique element of this process control technique is the use of high-fidelity, computer-generated dynamic models to replicate the behavior of actual Systems Under Test. It will provide a dynamic simulation capability that becomes the reference truth model, from which comparisons are made with the actual raw/conditioned data from the test elements.
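
    The truth-model comparison described above reduces to residual monitoring. The Python sketch below illustrates the structure with an invented first-order model, threshold, and disturbance; it is not the HFDMS implementation.

        def truth_model(u, x, dt=0.1, tau=2.0):
            """Reference dynamics: a first-order lag, standing in for the truth model."""
            return x + dt * (u - x) / tau

        def monitor(inputs, measurements, threshold=0.3):
            """Flag a fault when a measurement departs from the one-step prediction."""
            for k in range(1, len(measurements)):
                predicted = truth_model(inputs[k - 1], measurements[k - 1])
                residual = measurements[k] - predicted
                if abs(residual) > threshold:
                    print(f"step {k}: FAULT (residual {residual:+.2f})")

        u = [1.0] * 40
        y = [0.0] * 40
        for k in range(1, 40):      # healthy plant; a 0.5 disturbance injected from k = 25
            y[k] = truth_model(u[k - 1], y[k - 1]) + (0.5 if k >= 25 else 0.0)
        monitor(u, y)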

  7. 46 CFR 61.40-3 - Design verification testing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 2 2011-10-01 2011-10-01 false Design verification testing. 61.40-3 Section 61.40-3... INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-3 Design verification testing. (a) Tests must verify that automated vital systems are designed, constructed, and operate in...

  8. Variability of the QuantiFERON®-TB gold in-tube test using automated and manual methods.

    PubMed

    Whitworth, William C; Goodwin, Donald J; Racster, Laura; West, Kevin B; Chuke, Stella O; Daniels, Laura J; Campbell, Brandon H; Bohanon, Jamaria; Jaffar, Atheer T; Drane, Wanzer; Sjoberg, Paul A; Mazurek, Gerald H

    2014-01-01

    The QuantiFERON®-TB Gold In-Tube test (QFT-GIT) detects Mycobacterium tuberculosis (Mtb) infection by measuring release of interferon gamma (IFN-γ) when T-cells (in heparinized whole blood) are stimulated with specific Mtb antigens. The amount of IFN-γ is determined by enzyme-linked immunosorbent assay (ELISA). Automation of the ELISA method may reduce variability. To assess the impact of ELISA automation, we compared QFT-GIT results and variability when ELISAs were performed manually and with automation. Blood was collected into two sets of QFT-GIT tubes and processed at the same time. For each set, IFN-γ was measured in automated and manual ELISAs. Variability in interpretations and IFN-γ measurements was assessed between automated (A1 vs. A2) and manual (M1 vs. M2) ELISAs. Variability in IFN-γ measurements was also assessed on separate groups stratified by the mean of the four ELISAs. Subjects (N = 146) had two automated and two manual ELISAs completed. Overall, interpretations were discordant for 16 (11%) subjects. Excluding one subject with indeterminate results, 7 (4.8%) subjects had discordant automated interpretations and 10 (6.9%) subjects had discordant manual interpretations (p = 0.17). Quantitative variability was not uniform; within-subject variability was greater with higher IFN-γ measurements and with manual ELISAs. For subjects with mean TB Responses within ±0.25 IU/mL of the 0.35 IU/mL cutoff, the within-subject standard deviation for two manual tests was 0.27 (CI95 = 0.22-0.37) IU/mL vs. 0.09 (CI95 = 0.07-0.12) IU/mL for two automated tests. QFT-GIT ELISA automation may reduce variability near the test cutoff. Methodological differences should be considered when interpreting and using IFN-γ release assays (IGRAs).
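
    For duplicate measurements, the within-subject standard deviation is commonly estimated as sqrt(sum(d^2) / 2n) over the paired differences. The sketch below shows that arithmetic on invented IFN-γ values, not the study data.

        import math

        def within_subject_sd(pairs):
            """Within-subject SD for duplicate tests: sqrt(sum(d^2) / (2 * n))."""
            return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs)))

        # Invented IFN-gamma pairs (IU/mL) near the 0.35 IU/mL cutoff:
        automated = [(0.31, 0.29), (0.40, 0.38), (0.22, 0.25), (0.36, 0.33)]
        manual = [(0.31, 0.10), (0.40, 0.62), (0.22, 0.45), (0.36, 0.20)]
        print(f"automated Sw = {within_subject_sd(automated):.2f} IU/mL")
        print(f"manual    Sw = {within_subject_sd(manual):.2f} IU/mL")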

  9. Python Scripts for Automation of Current-Voltage Testing of Semiconductor Devices (FY17)

    DTIC Science & Technology

    2017-01-01

    ARL-TR-7923 ● JAN 2017 US Army Research Laboratory Python Scripts for Automation of Current-Voltage Testing of Semiconductor...manual device-testing procedures is reduced or eliminated through automation. This technical report includes scripts written in Python, version 2.7, used ...nothing. 3.1.9 Exit Program The script exits the entire program. Line 505, sys.exit(), uses the sys package that comes with Python to exit system
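
    Scripts of this kind typically drive a source-measure unit over SCPI. The sketch below uses the pyvisa package against a Keithley 2400-class instrument at a hypothetical GPIB address; it illustrates the pattern rather than reproducing the report's own scripts, and the :READ? element order depends on instrument configuration.

        import pyvisa  # pip install pyvisa (plus a VISA backend)

        def iv_sweep(resource="GPIB0::24::INSTR", v_start=0.0, v_stop=1.0, points=11):
            """Sweep voltage and read current from a Keithley 2400-class SMU over SCPI."""
            rm = pyvisa.ResourceManager()
            smu = rm.open_resource(resource)
            smu.write(":SOUR:FUNC VOLT")          # source voltage
            smu.write(":SENS:FUNC 'CURR'")        # measure current
            smu.write(":OUTP ON")
            data = []
            for i in range(points):
                v = v_start + i * (v_stop - v_start) / (points - 1)
                smu.write(f":SOUR:VOLT {v:.4f}")
                reading = smu.query(":READ?")     # element order varies by setup
                data.append((v, float(reading.split(",")[1])))
            smu.write(":OUTP OFF")
            smu.close()
            return data

        # iv = iv_sweep()  # requires a connected instrument and VISA backend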

  10. The Changing Role of the Clinical Microbiology Laboratory in Defining Resistance in Gram-negatives.

    PubMed

    Endimiani, Andrea; Jacobs, Michael R

    2016-06-01

    The evolution of resistance in Gram-negatives has challenged the clinical microbiology laboratory to implement new methods for their detection. Multidrug-resistant strains present major challenges to conventional and new detection methods. More rapid pathogen identification and antimicrobial susceptibility testing have been developed for use directly on specimens, including fluorescence in situ hybridization tests, automated polymerase chain reaction systems, microarrays, mass spectroscopy, next-generation sequencing, and microfluidics. Review of these methods shows the advances that have been made in rapid detection of resistance in cultures, but limited progress in direct detection from specimens.

  11. Next-Generation Sequencing for Infectious Disease Diagnosis and Management: A Report of the Association for Molecular Pathology.

    PubMed

    Lefterova, Martina I; Suarez, Carlos J; Banaei, Niaz; Pinsky, Benjamin A

    2015-11-01

    Next-generation sequencing (NGS) technologies are increasingly being used for diagnosis and monitoring of infectious diseases. Herein, we review the application of NGS in clinical microbiology, focusing on genotypic resistance testing, direct detection of unknown disease-associated pathogens in clinical specimens, investigation of microbial population diversity in the human host, and strain typing. We have organized the review into three main sections: i) applications in clinical virology, ii) applications in clinical bacteriology, mycobacteriology, and mycology, and iii) validation, quality control, and maintenance of proficiency. Although NGS holds enormous promise for clinical infectious disease testing, many challenges remain, including automation, standardizing technical protocols and bioinformatics pipelines, improving reference databases, establishing proficiency testing and quality control measures, and reducing cost and turnaround time, all of which would be necessary for widespread adoption of NGS in clinical microbiology laboratories.

  12. On Feature Extraction from Large Scale Linear LiDAR Data

    NASA Astrophysics Data System (ADS)

    Acharjee, Partha Pratim

    Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two feature extraction algorithms and explored their use in practical applications. One of the developed algorithms maps still and flowing waterbody features; the other extracts building features and estimates solar potential on rooftops and facades.

    Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation and upgrading, and hydro-flattening of LiDAR data for many other applications, are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features such as the flatness of the water surface and the higher elevation change at the water-land interface, and optical properties such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues, and by integrating all results into a single output. The whole algorithm is implemented as an ArcGIS toolbox using Python libraries. Testing and validation were performed on large datasets to determine the effectiveness of the toolbox, and results are presented.

    Significant power demand is located in urban areas, where, theoretically, a large amount of building surface area is also available for solar panel installation. Therefore, property owners and power generation companies can benefit from a citywide solar potential map, which can provide the estimated annual solar energy available at a given location. An efficient solar potential measurement is a prerequisite for an effective solar energy system in an urban area, and calculating solar potential for rooftops and building facades could open up a wide variety of options for solar panel installation. However, complex urban scenes make it hard to estimate the solar potential, partly because of shadows cast by the buildings. LiDAR-based 3D city models could be the right technology for solar potential mapping. Most current LiDAR-based local solar potential assessment algorithms mainly address rooftop potential calculation, whereas building facades can contribute a significant amount of viable surface area for solar panel installation. Here we introduce a new algorithm to calculate the solar potential of both rooftops and building facades. Solar potential received by the rooftops and facades over the year is also investigated in the test area.
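
    The geometric and optical cues above translate into simple per-window tests. The Python sketch below shows a toy version of the classification rule; the grid format and thresholds are invented, and the production implementation is the ArcGIS/Python toolbox described.

        def classify_water(elev, intensity, flat_tol=0.05, dark_tol=20.0):
            """Label cells as water when the 3x3 window is flat and the return is weak."""
            rows, cols = len(elev), len(elev[0])
            water = [[False] * cols for _ in range(rows)]
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    window = [elev[r + dr][c + dc]
                              for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
                    flat = max(window) - min(window) < flat_tol  # water is flat
                    dark = intensity[r][c] < dark_tol            # specular dropout
                    water[r][c] = flat and dark
            return water

        # water_mask = classify_water(elevation_grid, intensity_grid)  # lists of lists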

  13. Moles: Tool-Assisted Environment Isolation with Closures

    NASA Astrophysics Data System (ADS)

    de Halleux, Jonathan; Tillmann, Nikolai

    Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.
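
    Moles is specific to .NET, but the detouring idea has a rough Python analogue in unittest.mock: the test substitutes a deterministic implementation for a hard dependency for the duration of the test. A minimal sketch:

        import time
        import unittest.mock as mock

        def timestamped_id():
            # Production code with an environment dependency (the wall clock).
            return f"id-{int(time.time())}"

        # Test: detour time.time to a deterministic stub, analogous to a mole.
        with mock.patch("time.time", return_value=1234567890):
            assert timestamped_id() == "id-1234567890"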

  14. Rapid SAW Sensor Development Tools

    NASA Technical Reports Server (NTRS)

    Wilson, William C.; Atkinson, Gary M.

    2007-01-01

    The lack of integrated design tools for Surface Acoustic Wave (SAW) devices has led us to develop tools for the design, modeling, analysis, and automatic layout generation of SAW devices. These tools enable rapid development of wireless SAW sensors. The tools developed have been designed to integrate into existing Electronic Design Automation (EDA) tools to take advantage of existing 3D modeling, and Finite Element Analysis (FEA). This paper presents the SAW design, modeling, analysis, and automated layout generation tools.

  15. Integrated Test Facility (ITF)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control law development and modification, aerodynamic, propulsion, guidance model qualification, and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay in the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio linked to the test aircraft during actual flight. The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  17. Automated Blazar Light Curves Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Spencer James

    2017-07-27

    This presentation describes a problem and methodology pertaining to automated blazar light curves. Namely, studying the optical variability patterns of blazars requires the construction of light curves, and in order to generate the light curves, the data must be filtered before processing to ensure quality.

  18. Development of a Graphics Based Automated Emergency Response System (AERS) for Rail Transit Systems

    DOT National Transportation Integrated Search

    1989-05-01

    This report presents an overview of the second generation Automated Emergency Response System (AERS2). Developed to assist transit systems in responding effectively to emergency situations, AERS2 is a microcomputer-based information retrieval system ...

  19. Quality control methods for linear accelerator radiation and mechanical axes alignment.

    PubMed

    Létourneau, Daniel; Keller, Harald; Becker, Nathan; Amin, Md Nurul; Norrlinger, Bernhard; Jaffray, David A

    2018-06-01

    The delivery accuracy of highly conformal dose distributions generated using intensity modulation and collimator, gantry, and couch degrees of freedom is directly affected by the quality of the alignment between the radiation beam and the mechanical axes of a linear accelerator. For this purpose, quality control (QC) guidelines recommend a tolerance of ±1 mm for the coincidence of the radiation and mechanical isocenters. Traditional QC methods for assessment of radiation and mechanical axes alignment (based on pointer alignment) are time consuming and complex tasks that provide limited accuracy. In this work, an automated test suite based on an analytical model of the linear accelerator motions was developed to streamline the QC of radiation and mechanical axes alignment. The proposed method used the automated analysis of megavoltage images of two simple task-specific phantoms acquired at different linear accelerator settings to determine the coincidence of the radiation and mechanical isocenters. The sensitivity and accuracy of the test suite were validated by introducing actual misalignments on a linear accelerator between the radiation axis and the mechanical axes using both beam steering and mechanical adjustments of the gantry and couch. The validation demonstrated that the new QC method can detect sub-millimeter misalignment between the radiation axis and the three mechanical axes of rotation. A displacement of the radiation source of 0.2 mm using beam steering parameters was easily detectable with the proposed collimator rotation axis test. Mechanical misalignments of the gantry and couch rotation axes of the same magnitude (0.2 mm) were also detectable using the new gantry and couch rotation axis tests. For the couch rotation axis, the phantom and test design allow detection of both translational and tilt misalignments with the radiation beam axis. For the collimator rotation axis, the test can isolate the misalignment between the beam radiation axis and the mechanical collimator rotation axis from the impact of field size asymmetry. The test suite can be performed in a reasonable time (30-35 min) due to simple phantom setup, prescription-based beam delivery, and automated image analysis. As well, it provides a clear description of the relationship between axes. After testing the sensitivity of the test suite to beam steering and mechanical errors, the results of the test suite were used to reduce the misalignment errors of the linac to less than 0.7-mm radius for all axes. The proposed test suite offers sub-millimeter assessment of the coincidence of the radiation and mechanical isocenters and the test automation reduces complexity with improved efficiency. The test suite results can be used to optimize the linear accelerator's radiation to mechanical isocenter alignment by beam steering and mechanical adjustment of gantry and couch.

  20. Statechart Analysis with Symbolic PathFinder

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.

    2012-01-01

    We report here on our on-going work that addresses the automated analysis and test case generation for software systems modeled using multiple Statechart formalisms. The work is motivated by large programs such as NASA Exploration, that involve multiple systems that interact via safety-critical protocols and are designed with different Statechart variants. To verify these safety-critical systems, we have developed Polyglot, a framework for modeling and analysis of model-based software written using different Statechart formalisms. Polyglot uses a common intermediate representation with customizable Statechart semantics and leverages the analysis and test generation capabilities of the Symbolic PathFinder tool. Polyglot is used as follows: First, the structure of the Statechart model (expressed in Matlab Stateflow or Rational Rhapsody) is translated into a common intermediate representation (IR). The IR is then translated into Java code that represents the structure of the model. The semantics are provided as "pluggable" modules.
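
    The record describes Polyglot's pipeline (Statechart model to common IR, IR to Java code, semantics supplied as pluggable modules) without showing the representation itself. As a toy illustration of the idea, one structural representation whose step behavior is deferred to a pluggable semantics argument, here is a minimal sketch in Python rather than Polyglot's Java; the class and dialect names are hypothetical.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Statechart:
        """Toy intermediate representation: states plus guarded transitions."""
        initial: str
        transitions: dict = field(default_factory=dict)  # state -> [(event, guard, target)]

    def step(chart, state, event, env, semantics="stateflow"):
        """Fire one transition; the Statechart dialect is a pluggable parameter."""
        enabled = [(g, t) for (e, g, t) in chart.transitions.get(state, [])
                   if e == event and g(env)]
        if not enabled:
            return state
        if semantics == "stateflow" or len(enabled) == 1:
            return enabled[0][1]          # deterministic: first enabled transition wins
        raise ValueError("this dialect must resolve nondeterminism explicitly")

    chart = Statechart(
        initial="IDLE",
        transitions={"IDLE": [("engage", lambda env: env["armed"], "ACTIVE")],
                     "ACTIVE": [("fault", lambda env: True, "SAFE")]})
    print(step(chart, "IDLE", "engage", {"armed": True}))  # ACTIVE
    ```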

  1. Model Checking Artificial Intelligence Based Planners: Even the Best Laid Plans Must Be Verified

    NASA Technical Reports Server (NTRS)

    Smith, Margaret H.; Holzmann, Gerard J.; Cucullu, Gordon C., III; Smith, Benjamin D.

    2005-01-01

    Automated planning systems (APS) are gaining acceptance for use on NASA missions, as evidenced by APS flown on missions such as Orbiter and Deep Space 1, both of which were commanded by onboard planning systems. The planning system takes high-level goals and expands them onboard into a detailed plan of action that the spacecraft executes. The system must be verified to ensure that the automatically generated plans achieve the goals as expected and do not generate actions that would harm the spacecraft or mission. These systems are typically tested using empirical methods. Formal methods, such as model checking, offer exhaustive or measurable test coverage, which leads to much greater confidence in correctness. This paper describes a formal method based on the SPIN model checker. This method guarantees that possible plans meet certain desirable properties. We express the input model in Promela, the language of SPIN, and express the properties of desirable plans formally.
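
    SPIN itself explores Promela models; purely as a language-neutral illustration of what exhaustive coverage buys over empirical testing, here is a minimal explicit-state safety check over a toy plan model. The state space and the "fuel never goes negative" property are invented for the example.

    ```python
    from collections import deque

    def check_safety(initial, successors, is_bad):
        """Breadth-first search of a finite transition system; returns a
        counterexample path to a bad state, or None if none is reachable."""
        frontier = deque([(initial, [initial])])
        seen = {initial}
        while frontier:
            state, path = frontier.popleft()
            if is_bad(state):
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None

    # Toy plan model: state = (step, fuel); each step may burn 0 or 2 units
    def successors(state):
        step, fuel = state
        return [(step + 1, fuel - burn) for burn in (0, 2)] if step < 5 else []

    print(check_safety((0, 3), successors, lambda s: s[1] < 0))
    ```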

  2. Appendix C: Automated Vitrification of Mammalian Embryos on a Digital Microfluidic Device.

    PubMed

    Liu, Jun; Pyne, Derek G; Abdelgawad, Mohamed; Sun, Yu

    2017-01-01

    This chapter introduces a digital microfluidic device that automates sample preparation for mammalian embryo vitrification. Individual microdroplets manipulated on the microfluidic device were used as microvessels to transport a single mouse embryo through a complete vitrification procedure. Advantages of this approach, compared to manual operation and channel-based microfluidic vitrification, include automated operation, cryoprotectant concentration gradient generation, and feasibility of loading and retrieval of embryos.

  3. Requirements for Flight Testing Automated Terminal Service

    DOT National Transportation Integrated Search

    1977-05-01

    This report describes requirements for the flight tests of the baseline Automated Terminal Service (ATS) system. The overall objective of the flight test program is to evaluate the feasibility of the ATS concept. Within this objective there are two ...

  4. Computer-Generated Feedback on Student Writing

    ERIC Educational Resources Information Center

    Ware, Paige

    2011-01-01

    A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…

  5. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour-intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions for simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step to a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.

  6. Walter C. Williams Research Aircraft Integration Facility (RAIF)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control law development and modification, aerodynamic, propulsion, guidance model qualification, and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay in the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio linked to the test aircraft during actual flight. The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  7. Cutting force measurement of electrical jigsaw by strain gauges

    NASA Astrophysics Data System (ADS)

    Kazup, L.; Varadine Szarka, A.

    2016-11-01

    This paper describes a strain-gauge-based measuring method for accurate specification of an electric jigsaw's cutting force. The goal of the measurement is to provide an overall perspective on the forces generated in a jigsaw's gearbox during a cutting period, since these forces primarily determine the lifetime of the tool. This analysis is part of a research and development project aiming to develop a special linear magnetic brake for realizing automatic lifetime tests of electric jigsaws and similar handheld tools. Accurate specification of the cutting force makes it possible to define realistic test cycles for the automatic lifetime test. The accuracy and precision afforded by the well-described cutting-force characteristic, together with the possibility of automation, open a new dimension for lifetime testing of handheld tools with alternating movement.

  8. Method and apparatus for automatically detecting patterns in digital point-ordered signals

    DOEpatents

    Brudnoy, David M.

    1998-01-01

    The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output.
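
    The decomposition algorithm itself is not spelled out in the abstract. The sketch below shows one plausible reading of an additive split of a non-periodic, point-ordered signal into baseline, background noise, and peaks: a wide median filter tracks the baseline, and a robust threshold on the residual separates peaks from noise. The window size and threshold are illustrative assumptions, not values from the patent.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter
    from scipy.signal import find_peaks

    def decompose(signal, baseline_window=101):
        """Additive split: signal = baseline + background noise + peaks."""
        baseline = median_filter(signal, size=baseline_window, mode="nearest")
        residual = signal - baseline
        sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
        idx, _ = find_peaks(np.abs(residual), height=4.0 * sigma)
        peaks = np.zeros_like(signal)
        peaks[idx] = residual[idx]
        return baseline, residual - peaks, idx   # baseline, noise, peak locations

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1000)
    sig = 0.5 * t + 0.01 * rng.normal(size=1000)   # drifting baseline + noise
    sig[400] += 0.3                                # a feature indication
    baseline, noise, peak_idx = decompose(sig)
    print(peak_idx)                                # expect [400]
    ```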

  9. Method and apparatus for automatically detecting patterns in digital point-ordered signals

    DOEpatents

    Brudnoy, D.M.

    1998-10-20

    The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output. 14 figs.

  10. Passive Seismic Monitoring for Rockfall at Yucca Mountain: Concept Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, J; Twilley, K; Murvosh, H

    2003-03-03

    For the purpose of proof-testing a system intended to remotely monitor rockfall inside a potential radioactive waste repository at Yucca Mountain, a system of seismic sub-arrays will be deployed and tested on the surface of the mountain. The goal is to identify and locate rockfall events remotely using automated data collecting and processing techniques. We install seismometers on the ground surface, generate seismic energy to simulate rockfall in underground space beneath the array, and interpret the surface response to discriminate and locate the event. Data will be analyzed using matched-field processing, a generalized beam forming method for localizing discrete signals. Software is being developed to facilitate the processing. To date, a three-component sub-array has been installed and successfully tested.

  11. Reagent and labor cost optimization through automation of fluorescence in situ hybridization (FISH) with the VP 2000: an Italian case study.

    PubMed

    Zanatta, Lucia; Valori, Laura; Cappelletto, Eleonora; Pozzebon, Maria Elena; Pavan, Elisabetta; Dei Tos, Angelo Paolo; Merkle, Dennis

    2015-02-01

    In the modern molecular diagnostic laboratory, cost considerations are of paramount importance. Automation of complex molecular assays not only allows a laboratory to accommodate higher test volumes and throughput but also has a considerable impact on the cost of testing from the perspective of reagent costs, as well as hands-on time for skilled laboratory personnel. The following study tracked the cost of labor (hands-on time) and reagents for fluorescence in situ hybridization (FISH) testing in a routine, high-volume pathology and cytogenetics laboratory in Treviso, Italy, over a 2-y period (2011-2013). The laboratory automated FISH testing with the VP 2000 Processor, a deparaffinization, pretreatment, and special staining instrument produced by Abbott Molecular, and compared hands-on time and reagent costs to manual FISH testing. The results indicated significant cost and time saving when automating FISH with VP 2000 when more than six FISH tests were run per week. At 12 FISH assays per week, an approximate total cost reduction of 55% was observed. When running 46 FISH specimens per week, the cost saving increased to 89% versus manual testing. The results demonstrate that the VP 2000 processor can significantly reduce the cost of FISH testing in diagnostic laboratories. © 2014 Society for Laboratory Automation and Screening.

  12. LH750 hematology analyzers to identify malaria and dengue and distinguish them from other febrile illnesses.

    PubMed

    Sharma, P; Bhargava, M; Sukhachev, D; Datta, S; Wattal, C

    2014-02-01

    Tropical febrile illnesses such as malaria and dengue are challenging to differentiate clinically. Automated cellular indices from hematology analyzers may afford a preliminary rapid distinction. Blood count and VCS parameters from 114 malaria patients, 105 dengue patients, and 105 febrile controls without dengue or malaria were analyzed. Statistical discriminant functions were generated, and their diagnostic performances were assessed by ROC curve analysis. Three statistical functions were generated: (i) malaria-vs.-controls factor incorporating platelet count and standard deviations of lymphocyte volume and conductivity that identified malaria with 90.4% sensitivity, 88.6% specificity; (ii) dengue-vs.-controls factor incorporating platelet count, lymphocyte percentage and standard deviation of lymphocyte conductivity that identified dengue with 81.0% sensitivity and 77.1% specificity; and (iii) febrile-controls-vs.-malaria/dengue factor incorporating mean corpuscular hemoglobin concentration, neutrophil percentage, mean lymphocyte and monocyte volumes, and standard deviation of monocyte volume that distinguished malaria and dengue from other febrile illnesses with 85.1% sensitivity and 91.4% specificity. Leukocyte abnormalities quantitated by automated analyzers successfully identified malaria and dengue and distinguished them from other fevers. These economical discriminant functions can be rapidly calculated by analyzer software programs to generate electronic flags to trigger specific testing. They could potentially transform diagnostic approaches to tropical febrile illnesses in cost-constrained settings. © 2013 John Wiley & Sons Ltd.
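
    The fitted coefficients come from real VCS data that the record does not include, so no attempt is made to reproduce them here; the sketch below only shows the generic workflow on synthetic stand-in data: fit a linear discriminant on a few cell-population features, then read an operating point off the ROC curve. The feature names and distributions are invented.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical features: platelet count, SD of lymphocyte volume,
    # SD of lymphocyte conductivity (malaria lowers platelets, widens volumes)
    rng = np.random.default_rng(0)
    malaria = rng.normal([90.0, 24.0, 8.0], [30.0, 4.0, 2.0], size=(114, 3))
    controls = rng.normal([250.0, 16.0, 5.0], [60.0, 3.0, 1.5], size=(105, 3))
    X = np.vstack([malaria, controls])
    y = np.array([1] * 114 + [0] * 105)

    lda = LinearDiscriminantAnalysis().fit(X, y)
    scores = lda.decision_function(X)
    print("AUC:", round(roc_auc_score(y, scores), 3))

    # Operating point closest to the ideal corner (FPR 0, TPR 1)
    fpr, tpr, _ = roc_curve(y, scores)
    best = np.argmin(np.hypot(fpr, 1.0 - tpr))
    print("sensitivity %.2f, specificity %.2f" % (tpr[best], 1.0 - fpr[best]))
    ```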

  13. Support vector machine as a binary classifier for automated object detection in remotely sensed data

    NASA Astrophysics Data System (ADS)

    Wardaya, P. D.

    2014-02-01

    In the present paper, the author proposes the application of the Support Vector Machine (SVM) to the analysis of satellite imagery. One of the advantages of the SVM is that, with limited training data, it may generate results comparable to or even better than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier that separates two classes, namely object and background. The algorithm aims at effectively detecting an object against its background with minimal training data. A synthetic image containing noise is used for algorithm testing. Furthermore, the method is applied to remote sensing image analysis tasks such as identification of island vegetation, water bodies, and oil spills from satellite imagery. The results indicate that the SVM provides fast and accurate analysis with acceptable results.
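
    As a concrete illustration of the "binary classifier with minimal training data" setup described above, the sketch below trains an SVM on a handful of labeled object and background pixels of a synthetic two-band image and then classifies every pixel. All sizes and intensities are invented.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Synthetic 2-band image: a brighter "object" blob on a darker background
    rng = np.random.default_rng(1)
    img = rng.normal(0.2, 0.05, size=(64, 64, 2))
    img[20:40, 20:40] += 0.5                    # the object occupies 400 pixels

    # Minimal training data: a few labeled object and background pixels
    obj = img[25:30, 25:30].reshape(-1, 2)
    bg = np.vstack([img[:5, :5].reshape(-1, 2), img[-5:, -5:].reshape(-1, 2)])
    X = np.vstack([obj, bg])
    y = np.array([1] * len(obj) + [0] * len(bg))

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    mask = clf.predict(img.reshape(-1, 2)).reshape(64, 64)
    print("object pixels found:", int(mask.sum()))   # close to 400
    ```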

  14. Managing complexity in simulations of land surface and near-surface processes

    DOE PAGES

    Coon, Ethan T.; Moulton, J. David; Painter, Scott L.

    2016-01-12

    Increasing computing power and the growing role of simulation in Earth systems science have led to an increase in the number and complexity of processes in modern simulators. We present a multiphysics framework that specifies interfaces for coupled processes and automates weak and strong coupling strategies to manage this complexity. Process management is enabled by viewing the system of equations as a tree, where individual equations are associated with leaf nodes and coupling strategies with internal nodes. A dynamically generated dependency graph connects a variable to its dependencies, streamlining and automating model evaluation, easing model development, and ensuring models are modular and flexible. Additionally, the dependency graph is used to ensure that data requirements are consistent between all processes in a given simulation. Here we discuss the design and implementation of these concepts within the Arcos framework, and demonstrate their use for verification testing and hypothesis evaluation in numerical experiments.
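
    The record's key data structure, a dependency graph that connects each variable to its dependencies and automates model evaluation, can be sketched in a few lines. This is a minimal illustration of the concept, not Arcos code; the registered variables are invented.

    ```python
    class DependencyGraph:
        """Lazily evaluate model quantities through a dependency graph.

        Each variable is registered with the function that computes it and the
        names of its dependencies; evaluation is memoized, so quantities shared
        by several processes are computed once per solver step.
        """
        def __init__(self):
            self.nodes = {}                     # name -> (func, dependency names)
            self.cache = {}

        def register(self, name, func, deps=()):
            self.nodes[name] = (func, deps)

        def evaluate(self, name):
            if name not in self.cache:
                func, deps = self.nodes[name]
                self.cache[name] = func(*(self.evaluate(d) for d in deps))
            return self.cache[name]

    g = DependencyGraph()
    g.register("pressure", lambda: 101325.0)
    g.register("temperature", lambda: 290.0)
    g.register("density", lambda p, T: p / (287.0 * T), ("pressure", "temperature"))
    g.register("mass_flux", lambda rho: 2.0 * rho, ("density",))
    print(g.evaluate("mass_flux"))              # density is computed exactly once
    ```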

  15. Review of Integrated Noise Model (INM) Equations and Processes

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph

    2003-01-01

    The FAA's Integrated Noise Model (INM) relies on the methods of the SAE AIR-1845 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and the 737-800 (CFM56-7 26K).

  16. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  17. Wet Lab Accelerator: A Web-Based Application Democratizing Laboratory Automation for Synthetic Biology.

    PubMed

    Bates, Maxwell; Berliner, Aaron J; Lachoff, Joe; Jaschke, Paul R; Groban, Eli S

    2017-01-20

    Wet Lab Accelerator (WLA) is a cloud-based tool that allows a scientist to conduct biology via robotic control without the need for any programming knowledge. A drag and drop interface provides a convenient and user-friendly method of generating biological protocols. Graphically developed protocols are turned into programmatic instruction lists required to conduct experiments at the cloud laboratory Transcriptic. Prior to the development of WLA, biologists were required to write in a programming language called "Autoprotocol" in order to work with Transcriptic. WLA relies on a new abstraction layer we call "Omniprotocol" to convert the graphical experimental description into lower level Autoprotocol language, which then directs robots at Transcriptic. While WLA has only been tested at Transcriptic, the conversion of graphically laid out experimental steps into Autoprotocol is generic, allowing extension of WLA into other cloud laboratories in the future. WLA hopes to democratize biology by bringing automation to general biologists.

  18. LAMMPS integrated materials engine (LIME) for efficient automation of particle-based simulations: application to equation of state generation

    NASA Astrophysics Data System (ADS)

    Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.

    2017-07-01

    We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.

  19. Automated system for analyzing the activity of individual neurons

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.

    1993-01-01

    This paper presents a signal processing system that: (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain; and (2) demonstrates the feasibility of generating the command signals of prostheses using the activity of relevant neurons in disabled subjects. The system operates online, in a fully automated manner and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates on-line. The system is implemented with a fast IBM PC compatible processor board that allows on-line operation.
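
    The detection and classification algorithms are only named in the abstract; the sketch below shows a common baseline for the two automated pieces, a detection threshold derived from a robust noise estimate and nearest-template waveform classification. The window length, threshold multiple, and data are illustrative, not the paper's exact algorithms.

    ```python
    import numpy as np

    def detect_spikes(trace, fs, templates=None, half=16):
        """Threshold detection with a noise-derived threshold, then
        nearest-template classification of each detected waveform."""
        sigma = np.median(np.abs(trace)) / 0.6745      # robust noise estimate
        thr = 4.5 * sigma                              # automatic threshold
        peaks = np.flatnonzero((trace[1:-1] > thr)
                               & (trace[1:-1] >= trace[:-2])
                               & (trace[1:-1] > trace[2:])) + 1
        events = []
        for i in peaks:
            if i < half or i + half > len(trace):
                continue
            w = trace[i - half:i + half]
            label = -1                                 # -1 = unclassified
            if templates is not None:
                label = min(range(len(templates)),
                            key=lambda k: np.sum((templates[k] - w) ** 2))
            events.append((i / fs, label))
        return events

    rng = np.random.default_rng(2)
    trace = rng.normal(0.0, 1.0, 30000)
    trace[5000:5032] += 8.0 * np.exp(-0.5 * ((np.arange(32) - 8) / 3.0) ** 2)
    print(detect_spikes(trace, fs=30000.0))            # one event near t = 0.167 s
    ```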

  20. Automated Planning for a Deep Space Communications Station

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Fisher, Forest; Mutz, Darren; Chien, Steve

    1999-01-01

    This paper describes the application of Artificial Intelligence planning techniques to the problem of antenna track plan generation for a NASA Deep Space Communications Station. The described system enables an antenna communications station to automatically respond to a set of tracking goals by correctly configuring the appropriate hardware and software to provide the requested communication services. To perform this task, the Automated Scheduling and Planning Environment (ASPEN) has been applied to automatically produce antenna tracking plans that are tailored to support a set of input goals. In this paper, we describe the antenna automation problem, the ASPEN planning and scheduling system, how ASPEN is used to generate antenna track plans, the results of several technology demonstrations, and future work utilizing dynamic planning technology.

  1. Automation of Space Station module power management and distribution system

    NASA Technical Reports Server (NTRS)

    Bechtel, Robert; Weeks, Dave; Walls, Bryan

    1990-01-01

    Viewgraphs on the automation of the Space Station module (SSM) power management and distribution (PMAD) system are presented. Topics covered include: reasons for power system automation; the SSM/PMAD approach to automation; the SSM/PMAD test bed; SSM/PMAD topology; functional partitioning; SSM/PMAD control; rack-level autonomy; the FRAMES AI system; and future technology needs for power system automation.

  2. Automation software for a materials testing laboratory

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Bonacuse, Peter J.

    1990-01-01

    The software environment in use at the NASA-Lewis Research Center's High Temperature Fatigue and Structures Laboratory is reviewed. This software environment is aimed at supporting the tasks involved in performing materials behavior research. The features and capabilities of the approach to specifying a materials test include static and dynamic control mode switching, enabling multimode test control; dynamic alteration of the control waveform based upon events occurring in the response variables; precise control over the nature of both command waveform generation and data acquisition; and the nesting of waveform/data acquisition strategies so that material history dependencies may be explored. To eliminate repetitive tasks in the conventional research process, a communications network software system is established which provides file interchange and remote console capabilities.

  3. 46 CFR 130.480 - Test procedure and operations manual.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Test procedure and operations manual. 130.480 Section... VESSEL CONTROL, AND MISCELLANEOUS EQUIPMENT AND SYSTEMS Automation of Unattended Machinery Spaces § 130.480 Test procedure and operations manual. (a) A procedure for tests to be conducted on automated...

  4. Test oracle automation for V&V of an autonomous spacecraft's planner

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Smith, B.

    2001-01-01

    We built automation to assist the software testing efforts associated with the Remote Agent experiment. In particular, our focus was upon introducing test oracles into the testing of the planning and scheduling system component. This summary is intended to provide an overview of the work.

  5. Semi-automatic mapping of geological Structures using UAV-based photogrammetric data: An image analysis approach

    NASA Astrophysics Data System (ADS)

    Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven

    2014-08-01

    Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, which was generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculation, using our automated method, shows a mean±standard error of 1.9°±2.2° and 4.4°±2.6° respectively with field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.
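
    Once the 3D points of a mapped structure have been extracted from the SfM surface model, dip and dip direction follow from a simple plane fit. The sketch below assumes east-north-up coordinates and uses the fact that the horizontal component of the upward plane normal points down-dip; the synthetic fault surface is invented for the example.

    ```python
    import numpy as np

    def dip_and_dip_direction(points):
        """Fit a plane to 3D points (x=east, y=north, z=up) by SVD and
        return (dip, dip direction) in degrees."""
        pts = np.asarray(points, float)
        centered = pts - pts.mean(axis=0)
        normal = np.linalg.svd(centered)[2][-1]        # smallest singular direction
        if normal[2] < 0.0:
            normal = -normal                           # make the normal point upward
        dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
        dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
        return dip, dip_dir

    # Synthetic fault surface dipping 30 degrees toward the east (azimuth 090)
    x, y = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
    z = -np.tan(np.radians(30.0)) * x
    pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    print(dip_and_dip_direction(pts))                  # ~(30.0, 90.0)
    ```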

  6. Automated Scoring for the "TOEFL Junior"® Comprehensive Writing and Speaking Test. Research Report. ETS RR-15-09

    ERIC Educational Resources Information Center

    Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel

    2015-01-01

    This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…

  7. SU-G-BRB-02: An Open-Source Software Analysis Library for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Yaldo, D

    Purpose: Routine linac quality assurance (QA) tests have become complex enough to require automation of most test analyses. A new data analysis software library was built that allows physicists to automate routine linear accelerator quality assurance tests. The package is open source, code tested, and benchmarked. Methods: Images and data were generated on a TrueBeam linac for the following routine QA tests: VMAT, starshot, CBCT, machine logs, Winston Lutz, and picket fence. The analysis library was built using the general programming language Python. Each test was analyzed with the library algorithms and compared to manual measurements taken at the time of acquisition. Results: VMAT QA results agreed within 0.1% between the library and manual measurements. Machine logs (dynalogs & trajectory logs) were successfully parsed; mechanical axis positions were verified for accuracy and MLC fluence agreed well with EPID measurements. CBCT QA measurements were within 10 HU and 0.2mm where applicable. Winston Lutz isocenter size measurements were within 0.2mm of TrueBeam’s Machine Performance Check. Starshot analysis was within 0.2mm of the Winston Lutz results for the same conditions. Picket fence images with and without a known error showed that the library was capable of detecting MLC offsets within 0.02mm. Conclusion: A new routine QA software library has been benchmarked and is available for use by the community. The library is open-source and extensible for use in larger systems.
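
    The library is not named in this record and its API is not reproduced here; purely to make the kind of analysis concrete, the sketch below implements a Winston-Lutz-style check, the distance between the radiation field center and the ball-bearing centroid in a synthetic EPID image. The thresholds and geometry are illustrative.

    ```python
    import numpy as np
    from scipy import ndimage

    def winston_lutz_offset(image):
        """Offset (pixels) between field center and BB centroid in an EPID image."""
        field = image > 0.5 * image.max()              # open-field region
        fy, fx = ndimage.center_of_mass(field)
        interior = ndimage.binary_erosion(field, iterations=5)
        bb = interior & (image < 0.8 * image[interior].mean())   # dark BB shadow
        by, bx = ndimage.center_of_mass(bb)
        return float(np.hypot(fx - bx, fy - by))

    img = np.full((200, 200), 0.02)
    img[60:140, 60:140] = 1.0                          # 80x80 radiation field
    yy, xx = np.ogrid[:200, :200]
    img[(yy - 101) ** 2 + (xx - 102) ** 2 < 16] = 0.3  # BB shadow off-center
    print(winston_lutz_offset(img))                    # ~2.9 px
    ```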

  8. Computational Ranking of Yerba Mate Small Molecules Based on Their Predicted Contribution to Antibacterial Activity against Methicillin-Resistant Staphylococcus aureus

    DOE PAGES

    Rempe, Caroline S.; Burris, Kellie P.; Woo, Hannah L.; ...

    2015-05-08

    We report that the aqueous extract of yerba mate, a South American tea beverage made from Ilex paraguariensis leaves, has demonstrated bactericidal and inhibitory activity against bacterial pathogens, including methicillin-resistant Staphylococcus aureus (MRSA). The gas chromatography-mass spectrometry (GC-MS) analysis of two unique fractions of yerba mate aqueous extract revealed 8 identifiable small molecules in those fractions with antimicrobial activity. For a more comprehensive analysis, a data analysis pipeline was assembled to prioritize compounds for antimicrobial testing against both MRSA and methicillin-sensitive S. aureus using forty-two unique fractions of the tea extract that were generated in duplicate, assayed for activity, and analyzed with GC-MS. As validation of our automated analysis, we checked our predicted active compounds for activity in literature references and used authentic standards to test for antimicrobial activity. 3,4-dihydroxybenzaldehyde showed the most antibacterial activity against MRSA at low concentrations in our bioassays. In addition, quinic acid and quercetin were identified using random forests analysis and 5-hydroxy pipecolic acid was identified using linear discriminant analysis. We also generated a ranked list of unidentified compounds that may contribute to the antimicrobial activity of yerba mate against MRSA. Here we utilized GC-MS data to implement an automated analysis that resulted in a ranked list of compounds that likely contribute to the antimicrobial activity of aqueous yerba mate extract against MRSA.
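
    The random forests step lends itself to a compact illustration: rank chromatographic peaks by how much they help predict which fractions were bioactive. The matrix below is synthetic stand-in data, not the study's GC-MS measurements.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Rows = 42 extract fractions, columns = 8 GC-MS peak areas (synthetic);
    # y = 1 when the fraction inhibited MRSA growth in the bioassay
    rng = np.random.default_rng(3)
    X = rng.lognormal(size=(42, 8))
    y = (X[:, 2] + 0.5 * X[:, 5] + rng.normal(0, 0.5, 42) > 2.5).astype(int)

    forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    ranking = np.argsort(forest.feature_importances_)[::-1]
    print("peaks ranked by predicted contribution to activity:", ranking)
    ```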

  9. Computational Ranking of Yerba Mate Small Molecules Based on Their Predicted Contribution to Antibacterial Activity against Methicillin-Resistant Staphylococcus aureus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rempe, Caroline S.; Burris, Kellie P.; Woo, Hannah L.

    We report that the aqueous extract of yerba mate, a South American tea beverage made from Ilex paraguariensis leaves, has demonstrated bactericidal and inhibitory activity against bacterial pathogens, including methicillin-resistant Staphylococcus aureus (MRSA). The gas chromatography-mass spectrometry (GC-MS) analysis of two unique fractions of yerba mate aqueous extract revealed 8 identifiable small molecules in those fractions with antimicrobial activity. For a more comprehensive analysis, a data analysis pipeline was assembled to prioritize compounds for antimicrobial testing against both MRSA and methicillin-sensitive S. aureus using forty-two unique fractions of the tea extract that were generated in duplicate, assayed for activity, and analyzed with GC-MS. As validation of our automated analysis, we checked our predicted active compounds for activity in literature references and used authentic standards to test for antimicrobial activity. 3,4-dihydroxybenzaldehyde showed the most antibacterial activity against MRSA at low concentrations in our bioassays. In addition, quinic acid and quercetin were identified using random forests analysis and 5-hydroxy pipecolic acid was identified using linear discriminant analysis. We also generated a ranked list of unidentified compounds that may contribute to the antimicrobial activity of yerba mate against MRSA. Here we utilized GC-MS data to implement an automated analysis that resulted in a ranked list of compounds that likely contribute to the antimicrobial activity of aqueous yerba mate extract against MRSA.

  10. Exponential error reduction in pretransfusion testing with automation.

    PubMed

    South, Susan F; Casina, Tony S; Li, Lily

    2012-08-01

    Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
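
    The arithmetic behind the reported RPN ranges is simple to restate: each process step is scored for severity, occurrence, and detectability, and the products are summed over the steps a method requires. The scores below are illustrative placeholders, not the study's FMEA scores.

    ```python
    def total_rpn(steps):
        """Sum of severity * occurrence * detectability over all process steps."""
        return sum(s * o * d for s, o, d in steps)

    manual = [(8, 6, 7)] * 30        # ~30 hands-on steps, error-prone
    automated = [(8, 2, 3)] * 7      # ~7 steps, machine-controlled
    m, a = total_rpn(manual), total_rpn(automated)
    print(m, a, "reduction: %.0f%%" % (100 * (1 - a / m)))   # ~97% here
    ```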

  11. Automated real-time software development

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Walker, Carrie K.; Turkovich, John J.

    1993-01-01

    A Computer-Aided Software Engineering (CASE) system has been developed at the Charles Stark Draper Laboratory (CSDL) under the direction of the NASA Langley Research Center. The CSDL CASE tool provides an automated method of generating source code and hard copy documentation from functional application engineering specifications. The goal is to significantly reduce the cost of developing and maintaining real-time scientific and engineering software while increasing system reliability. This paper describes CSDL CASE and discusses demonstrations that used the tool to automatically generate real-time application code.

  12. Ethics, finance, and automation: a preliminary survey of problems in high frequency trading.

    PubMed

    Davis, Michael; Kumiega, Andrew; Van Vliet, Ben

    2013-09-01

    All of finance is now automated, most notably high frequency trading. This paper examines the ethical implications of this fact. As automation is an interdisciplinary endeavor, we argue that the interfaces between the respective disciplines can lead to conflicting ethical perspectives; we also argue that existing disciplinary standards do not pay enough attention to the ethical problems automation generates. Conflicting perspectives undermine the protection those who rely on trading should have. Ethics in finance can be expanded to include organizational and industry-wide responsibilities to external market participants and society. As a starting point, quality management techniques can provide a foundation for a new cross-disciplinary ethical standard in the age of automation.

  13. Intraoperative Cochlear Implant Device Testing Utilizing an Automated Remote System: A Prospective Pilot Study.

    PubMed

    Lohmann, Amanda R; Carlson, Matthew L; Sladen, Douglas P

    2018-03-01

    Intraoperative cochlear implant device testing provides valuable information regarding device integrity, electrode position, and may assist with determining initial stimulation settings. Manual intraoperative device testing during cochlear implantation requires the time and expertise of a trained audiologist. The purpose of the current study is to investigate the feasibility of using automated remote intraoperative cochlear implant reverse telemetry testing as an alternative to standard testing. Prospective pilot study evaluating intraoperative remote automated impedance and Automatic Neural Response Telemetry (AutoNRT) testing in 34 consecutive cochlear implant surgeries using the Intraoperative Remote Assistant (Cochlear Nucleus CR120). In all cases, remote intraoperative device testing was performed by trained operating room staff. A comparison was made to the "gold standard" of manual testing by an experienced cochlear implant audiologist. Electrode position and absence of tip fold-over was confirmed using plain film x-ray. Automated remote reverse telemetry testing was successfully completed in all patients. Intraoperative x-ray demonstrated normal electrode position without tip fold-over. Average impedance values were significantly higher using standard testing versus CR120 remote testing (standard mean 10.7 kΩ, SD 1.2 vs. CR120 mean 7.5 kΩ, SD 0.7, p < 0.001). There was strong agreement between standard manual testing and remote automated testing with regard to the presence of open or short circuits along the array. There were, however, two cases in which standard testing identified an open circuit, when CR120 testing showed the circuit to be closed. Neural responses were successfully obtained in all patients using both systems. There was no difference in basal electrode responses (standard mean 195.0 μV, SD 14.10 vs. CR120 194.5 μV, SD 14.23; p = 0.7814); however, more favorable (lower μV amplitude) results were obtained with the remote automated system in the apical 10 electrodes (standard 185.4 μV, SD 11.69 vs. CR120 177.0 μV, SD 11.57; p value < 0.001). These preliminary data demonstrate that intraoperative cochlear implant device testing using a remote automated system is feasible. This system may be useful for cochlear implant programs with limited audiology support or for programs looking to streamline intraoperative device testing protocols. Future studies with larger patient enrollment are required to validate these promising, but preliminary, findings.

  14. Automation of electromagnetic compatibility (EMC) test facilities

    NASA Technical Reports Server (NTRS)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.

  15. Automated Seat Cushion for Pressure Ulcer Prevention Using Real-Time Mapping, Offloading, and Redistribution of Interface Pressure

    DTIC Science & Technology

    2016-10-01

    ... mechanical behavior at varying loads and internal pressures, both by experimental testing as well as finite element simulation. Automation and control testing has been completed on a 5x5 array of bubble actuators to verify pressure ... A finite element (FE) model of the bubble actuator was developed in the commercial software ANSYS in order to determine the deformation of the ...

  16. The light spot test: Measuring anxiety in mice in an automated home-cage environment.

    PubMed

    Aarts, Emmeke; Maroteaux, Gregoire; Loos, Maarten; Koopmans, Bastijn; Kovačević, Jovana; Smit, August B; Verhage, Matthijs; Sluis, Sophie van der

    2015-11-01

    Behavioral tests of animals in a controlled experimental setting provide a valuable tool to advance understanding of genotype-phenotype relations, and to study the effects of genetic and environmental manipulations. To optimally benefit from the increasing numbers of genetically engineered mice, reliable high-throughput methods for comprehensive behavioral phenotyping of mice lines have become a necessity. Here, we describe the development and validation of an anxiety test, the light spot test, that allows for unsupervised, automated, high-throughput testing of mice in a home-cage system. This automated behavioral test circumvents bias introduced by pretest handling, and enables recording both baseline behavior and the behavioral test response over a prolonged period of time. We demonstrate that the light spot test induces a behavioral response in C57BL/6J mice. This behavior reverts to baseline when the aversive stimulus is switched off, and is blunted by treatment with the anxiolytic drug Diazepam, demonstrating predictive validity of the assay, and indicating that the observed behavioral response has a significant anxiety component. Also, we investigated the effectiveness of the light spot test as part of sequential testing for different behavioral aspects in the home-cage. Two learning tests, administered prior to the light spot test, affected the light spot test parameters. The light spot test is a novel, automated assay for anxiety-related high-throughput testing of mice in an automated home-cage environment, allowing for both comprehensive behavioral phenotyping of mice, and rapid screening of pharmacological compounds. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Iterative dataset optimization in automated planning: Implementation for breast and rectal cancer radiotherapy.

    PubMed

    Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang

    2017-06-01

    To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of the iterative optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iterative optimized training dataset is selected by an iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional left-breast treatment plans were re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. The automatically generated re-optimized treatment plans were compared with the original manually optimized plans. By combining the iterative optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using the objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed new automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of the treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
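
    The two-parameter KDE predictor is not specified here in enough detail to reproduce; as a rough stand-in for the idea, the sketch below predicts a new patient's DVH as a kernel-weighted average of training DVHs, with weights given by similarity in two predictive features. The feature choices, bandwidth, and data are invented.

    ```python
    import numpy as np

    def predict_dvh(train_features, train_dvhs, query, bandwidth=1.0):
        """Nadaraya-Watson style DVH prediction from two planning features."""
        X = np.asarray(train_features, float)          # (n_plans, 2), standardized
        D = np.asarray(train_dvhs, float)              # (n_plans, n_dose_bins)
        d2 = np.sum((X - np.asarray(query, float)) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)         # Gaussian kernel weights
        return (w[:, None] * D).sum(axis=0) / w.sum()

    # Toy features: (OAR-target overlap fraction, OAR-target distance in cm)
    feats = [(0.30, 1.0), (0.10, 3.0), (0.20, 2.0)]
    dvhs = [np.linspace(1.0, 0.0, 50) ** p for p in (1.0, 3.0, 2.0)]
    print(predict_dvh(feats, dvhs, query=(0.25, 1.5))[:5])
    ```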

  18. An Automated Procedure for Evaluating Song Imitation

    PubMed Central

    Mandelblat-Cerf, Yael; Fee, Michale S.

    2014-01-01

    Songbirds have emerged as an excellent model system to understand the neural basis of vocal and motor learning. Like humans, songbirds learn to imitate the vocalizations of their parents or other conspecific “tutors.” Young songbirds learn by comparing their own vocalizations to the memory of their tutor song, slowly improving until over the course of several weeks they can achieve an excellent imitation of the tutor. Because of the slow progression of vocal learning, and the large amounts of singing generated, automated algorithms for quantifying vocal imitation have become increasingly important for studying the mechanisms underlying this process. However, methodologies for quantifying song imitation are complicated by the highly variable songs of either juvenile birds or those that learn poorly because of experimental manipulations. Here we present a method for the evaluation of song imitation that incorporates two innovations: First, an automated procedure for selecting pupil song segments, and, second, a new algorithm, implemented in Matlab, for computing both song acoustic and sequence similarity. We tested our procedure using zebra finch song and determined a set of acoustic features for which the algorithm optimally differentiates between similar and non-similar songs. PMID:24809510

  19. Design and implementation of an automated compound management system in support of lead optimization.

    PubMed

    Quintero, Catherine; Kariv, Ilona

    2009-06-01

    To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.

  20. Framework for Automated GD&T Inspection Using 3D Scanner

    NASA Astrophysics Data System (ADS)

    Pathak, Vimal Kumar; Singh, Amit Kumar; Sivadasan, M.; Singh, N. K.

    2018-04-01

    Geometric Dimensioning and Tolerancing (GD&T) is a symbolic language that helps designers, production personnel, and quality inspectors convey design specifications in an effective and efficient manner. GD&T has been practiced since the start of machine component assembly, but without being explicitly named. In recent times, however, industries have started to emphasize it increasingly. One prominent area where most industries struggle is quality inspection. The complete inspection process is largely human-intensive, and the use of conventional gauges and templates for inspection depends heavily on the skill of workers and quality inspectors. In industry, the concept of 3D scanning is not new, but it is used mainly for creating 3D drawings or models of physical parts; its potential as a powerful inspection tool is hardly explored. This study is centred on designing a procedure for automated inspection using a 3D scanner. Linear, geometric, and dimensional inspection of the most popular test bar, the stepped bar, was also carried out as a simple example under the new framework. New-generation engineering industries would welcome this automated inspection procedure, being quick and reliable with reduced human intervention.

  1. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each two regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
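
    The comparison metric is straightforward to restate in code: for each CCM class, compare the per-region pixel counts of the two co-registered maps. The sketch below is a generic reimplementation on random stand-in maps, not the IBIS procedure itself.

    ```python
    import numpy as np

    def count_agreement(map_a, map_b):
        """Percent agreement between per-class pixel counts of two maps."""
        agree = {}
        for c in np.union1d(np.unique(map_a), np.unique(map_b)):
            na, nb = int((map_a == c).sum()), int((map_b == c).sum())
            agree[int(c)] = 100.0 * min(na, nb) / max(na, nb, 1)
        return agree

    rng = np.random.default_rng(4)
    a = rng.integers(0, 4, size=(100, 100))            # 4 CCM classes
    b = a.copy()
    b[:3] = 0                                          # a small disagreement band
    print(count_agreement(a, b))
    ```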

  2. Automation of Ocean Product Metrics

    DTIC Science & Technology

    2008-09-30

    Shriver, J., J. D. Dykes, and J. Fabre: Automation of Operational Ocean Product Metrics. Presented at the Ocean Sciences 2008 Conf., 5 Mar 2008, and at the 2008 EGU General Assembly, 14 April 2008. ... processing (multiple data cuts per day) and multiple-nested models. Routines for generating automated evaluations of model forecast statistics will be developed and pre-existing tools will be collected to create a generalized tool set, which will include user-interface tools to the metrics data ...

  3. The third/second generation PTH assay ratio as a marker for parathyroid carcinoma: evaluation using an automated platform.

    PubMed

    Cavalier, Etienne; Betea, Daniela; Schleck, Marie-Louise; Gadisseur, Romy; Vroonen, Laurent; Delanaye, Pierre; Daly, Adrian F; Beckers, Albert

    2014-03-01

    Parathyroid carcinoma (PCa) is rare and often difficult to differentiate initially from benign disease. Because PCa oversecretes amino PTH that is detected by third-generation but not by second-generation PTH assays, the normal 3rd/2nd generation PTH ratio (<1) is inverted in PCa (ie, >1). The objective of the investigation was to study the utility and advantages of automated 3rd/2nd generation PTH ratio measurements using the Liaison XL platform over existing manual techniques. The study was conducted at a tertiary-referral academic center. This was a retrospective laboratory study. Eleven patients with advanced PCa (mean age 56.0 y). The controls were patients with primary-hyperparathyroidism (n = 144; mean age 53.8 y), renal transplantation (n = 41; mean age 50.6 y), hemodialysis (n = 80; mean age 65.2 y), and healthy elderly subjects (n = 40; mean age 72.6 y). The median (interquartile range) 3rd/2nd generation PTH ratio was 1.16 (1.10-1.38) in the PCa group, which was significantly higher than the control groups: hemodialysis: 0.74 (0.71-0.75); renal transplant: 0.77 (0.73-0.79); primary hyperparathyroidism: 0.76 (0.74-0.78); healthy elderly: 0.80 (0.74-0.83). An inverted 3rd/2nd-generation PTH ratio (>1) was seen in 9 of 11 PCa patients (81.8%) and in 7 of 305 controls (2.3%): 3 of 80 hemodialysis (3.8%), and 4 of 144 primary-hyperparathyroidism patients (2.8%). Of four PCa patients who had a normal PTH ratio with the manual method, two had an inverted 3rd/2nd-generation PTH ratio with the automated method. Study of the 3rd/2nd-generation PTH ratio in large patient populations should be feasible using a mainstream automated platform like the Liaison XL. The current study confirms the utility of the inverted 3rd/2nd-generation PTH ratio as a marker of PCa (sensitivity: 81.8%; specificity: 97.3%).

  4. Collecting and Animating Online Satellite Images.

    ERIC Educational Resources Information Center

    Irons, Ralph

    1995-01-01

    Describes how to generate automated classroom resources from the Internet. Topics covered include viewing animated satellite weather images using file transfer protocol (FTP); sources of images on the Internet; shareware available for viewing images; software for automating image retrieval; procedures for animating satellite images; and storing…

  5. High-resolution monitoring of marine protists based on an observation strategy integrating automated on-board filtration and molecular analyses

    NASA Astrophysics Data System (ADS)

    Metfies, Katja; Schroeder, Friedhelm; Hessel, Johanna; Wollschläger, Jochen; Micheller, Sebastian; Wolf, Christian; Kilias, Estelle; Sprong, Pim; Neuhaus, Stefan; Frickenhaus, Stephan; Petersen, Wilhelm

    2016-11-01

    Information on recent biomass distribution and biogeography of photosynthetic marine protists with adequate temporal and spatial resolution is urgently needed to better understand the consequences of environmental change for marine ecosystems. Here we introduce and review a molecular-based observation strategy for high-resolution assessment of these protists in space and time. It is the result of extensive technology development, adaptation, and evaluation documented in a number of publications, together with recently completed field testing whose results are introduced in this paper. The observation strategy is organized at four levels. At level 1, samples are collected at high spatiotemporal resolution using the remotely controlled automated filtration system AUTOFIM. The resulting samples can either be preserved for later laboratory analyses or directly subjected to molecular surveillance of key species aboard the ship via an automated biosensor system or quantitative polymerase chain reaction (level 2). Preserved samples are analyzed at the next observational levels in the laboratory (levels 3 and 4). At level 3, molecular fingerprinting methods provide a quick and reliable overview of differences in protist community composition. Finally, selected samples can be used to generate a detailed analysis of taxonomic protist composition via the latest next-generation sequencing (NGS) technology at level 4. An integrated dataset of the results of the different analyses provides comprehensive information on the diversity and biogeography of protists, including all related size classes, while the cost of the observation is optimized with respect to analysis effort and time.

  6. A test matrix sequencer for research test facility automation

    NASA Technical Reports Server (NTRS)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general-purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor-controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu-driven, with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
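
    In outline, a test matrix profiler of this kind steps through a table of timed set points, driving controller outputs and data-system contact closures at each step. The Python sketch below assumes a simple design; the step structure, channel names, and I/O stubs are hypothetical stand-ins, not the actual NASA implementation.

      import time
      from dataclasses import dataclass

      @dataclass
      class MatrixStep:
          duration_s: float   # dwell time at this step
          setpoints: dict     # analog channel -> commanded value
          contacts: dict      # digital channel -> open/closed state

      def run_matrix(steps, write_analog, write_digital):
          """Sequence the test matrix, one timed step at a time."""
          for i, step in enumerate(steps):
              for ch, value in step.setpoints.items():
                  write_analog(ch, value)       # set point to a controller
              for ch, closed in step.contacts.items():
                  write_digital(ch, closed)     # contact closure to a data system
              print(f"step {i}: holding {step.duration_s} s")
              time.sleep(step.duration_s)

      # Example two-step matrix with hypothetical channel names and stub I/O:
      steps = [
          MatrixStep(1.0, {"valve_A": 20.0}, {"daq_trigger": True}),
          MatrixStep(1.0, {"valve_A": 35.0}, {"daq_trigger": False}),
      ]
      run_matrix(steps, lambda ch, v: None, lambda ch, s: None)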

  7. Automation of the temperature elevation test in transformers with insulating oil.

    PubMed

    Vicente, José Manuel Esteves; Rezek, Angelo José Junqueira; de Almeida, Antonio Tadeu Lyrio; Guimarães, Carlos Alberto Mohallem

    2008-01-01

    The automation of the temperature elevation test is outlined here, covering both the oil temperature elevation and the determination of the winding temperature elevation. Automating this test requires four thermometers, a three-phase wattmeter, a motorized voltage variator, and a Kelvin bridge to measure the winding resistance. All of these instruments must communicate with a microcomputer on which the test program is implemented. The system outlined here was initially implemented in the laboratory and, owing to the good results achieved, is already in use in some transformer manufacturing plants.
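
    The winding temperature elevation in such a test is conventionally inferred from the change in winding resistance measured by the Kelvin bridge. Below is a sketch of that standard resistance-method calculation for copper windings; it illustrates the conventional formula, not the authors' code, and the example readings are hypothetical.

      # Resistance method for copper windings: the hot winding temperature T2
      # follows from cold/hot resistance readings via the copper temperature
      # constant k ≈ 234.5 °C:  T2 = (R2/R1) * (k + T1) - k
      K_COPPER = 234.5  # °C, reciprocal temperature coefficient for copper

      def winding_temperature(r_cold: float, t_cold: float, r_hot: float) -> float:
          """Infer hot winding temperature (°C) from Kelvin-bridge resistances."""
          return (r_hot / r_cold) * (K_COPPER + t_cold) - K_COPPER

      t2 = winding_temperature(r_cold=0.500, t_cold=25.0, r_hot=0.560)
      print(f"winding temperature: {t2:.1f} °C")  # ≈ 56.1 °C for these readings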

  8. Artificial intelligence and expert systems in flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  9. A novel tool for high-throughput screening of granulocyte-specific antibodies using the automated flow cytometric granulocyte immunofluorescence test (Flow-GIFT).

    PubMed

    Nguyen, Xuan Duc; Dengler, Thomas; Schulz-Linkholt, Monika; Klüter, Harald

    2011-02-03

    Transfusion-related acute lung injury (TRALI) is a severe complication associated with blood transfusion. TRALI has usually been associated with antibodies against leukocytes. The flow cytometric granulocyte immunofluorescence test (Flow-GIFT) has been introduced for routine use when investigating patients and healthy blood donors. Here we describe a novel tool for automating the Flow-GIFT that enables rapid screening of blood donations. We analyzed 440 sera from healthy female blood donors for the presence of granulocyte antibodies. As positive controls, 12 sera with known antibodies against HNA-1a, -1b, -2a, and -3a were additionally investigated. Whole-blood samples from HNA-typed donors were collected and the test cells isolated using cell sedimentation in a Ficoll density gradient. Subsequently, leukocytes were incubated with the respective serum, and binding of antibodies was detected using a FITC-conjugated antihuman antibody; 7-AAD was used to exclude dead cells. Pipetting steps were automated using the Biomek NXp Multichannel Automation Workstation. All samples were prepared in 96-deep-well plates and analyzed by flow cytometry. The standard granulocyte immunofluorescence test (GIFT) and granulocyte agglutination test (GAT) were also performed as reference methods. Sixteen sera were positive in the automated Flow-GIFT, while five of these sera were negative in the standard GIFT (anti-HNA-3a, n = 3; anti-HNA-1b, n = 1) and GAT (anti-HNA-2a, n = 1). The automated Flow-GIFT was able to detect all granulocyte antibodies that could only be detected by GIFT in combination with GAT. In serial dilution tests, the automated Flow-GIFT detected the antibodies at higher dilutions than the reference methods GIFT and GAT. The Flow-GIFT proved to be feasible for automation. This novel high-throughput system allows effective antigranulocyte antibody detection in a large donor population in order to prevent TRALI due to transfusion of blood products.

  10. Evaluation of an Automated System for Reading and Interpreting Disk Diffusion Antimicrobial Susceptibility Testing of Fastidious Bacteria.

    PubMed

    Idelevich, Evgeny A; Becker, Karsten; Schmitz, Janne; Knaack, Dennis; Peters, Georg; Köck, Robin

    2016-01-01

    Results of disk diffusion antimicrobial susceptibility testing depend on individual visual reading of inhibition zone diameters. Automated reading using camera systems might therefore represent a useful tool for standardization. In this study, the ADAGIO automated system (Bio-Rad) was evaluated for reading disk diffusion tests of fastidious bacteria. 144 clinical isolates (68 β-haemolytic streptococci, 28 Streptococcus pneumoniae, 18 viridans group streptococci, 13 Haemophilus influenzae, 7 Moraxella catarrhalis, and 10 Campylobacter jejuni) were tested on Mueller-Hinton agar supplemented with 5% defibrinated horse blood and 20 mg/L β-NAD (MH-F, Oxoid) according to EUCAST. Plates were read manually with a ruler and automatically using the ADAGIO system. Inhibition zone diameters indicated by the automated system were visually controlled and adjusted if necessary. Among 1548 isolate-antibiotic combinations, comparison of automated vs. manual reading yielded categorical agreement (CA) of 81.4% without visual adjustment of the automatically determined zone diameters. In 20% of tests (309 of 1548) it was deemed necessary to adjust the automatically determined zone diameter after visual control. After adjustment, CA was 94.8%; compared to manual reading, very major errors (false susceptible interpretation), major errors (false resistant interpretation), and minor errors (false categorization involving an intermediate result), calculated according to the ISO 20776-2 guideline, amounted to 13.7% (13 of 95 resistant results), 3.3% (47 of 1424 susceptible results), and 1.4% (21 of 1548 total results), respectively. The ADAGIO system allowed for automated reading of disk diffusion testing in fastidious bacteria and, after visual validation of the automated results, yielded good categorical agreement with manual reading.
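
    The error rates above follow directly from the ISO 20776-2 denominators: very major errors over resistant results, major errors over susceptible results, and minor errors over all results. A small sketch of this arithmetic (illustrative code, not part of the study):

      def error_rates(vme, n_resistant, me, n_susceptible, mie, n_total):
          """Very major, major, and minor error rates with their respective denominators."""
          return vme / n_resistant, me / n_susceptible, mie / n_total

      # Counts reported in the abstract:
      vme, me, mie = error_rates(13, 95, 47, 1424, 21, 1548)
      print(f"very major {vme:.1%}, major {me:.1%}, minor {mie:.1%}")
      # -> very major 13.7%, major 3.3%, minor 1.4%, matching the abstract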

  11. Model-Based Development of Automotive Electronic Climate Control Software

    NASA Astrophysics Data System (ADS)

    Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan

    With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task; an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on the domain of their expertise rather than on writing large volumes of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. A back-to-back testing approach is presented that ensures a flawless and smooth transition from legacy designs to model-based development. The Simulink report generator, used to create design documents from the models, is presented along with its use in running the simulation model and capturing the results in the test report. Test automation using a model-based development tool, which supports a common set of test cases across several testing levels and a test procedure independent of the software and hardware platform, is also presented.
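
    The essence of back-to-back testing is to drive the legacy implementation and the model-generated implementation with the same test vectors and compare their outputs within a tolerance. The sketch below illustrates the idea in Python (the paper works in Simulink); both controller functions are hypothetical placeholders.

      def legacy_controller(temp_error: float) -> float:           # hypothetical legacy code
          return max(0.0, min(100.0, 4.0 * temp_error))            # clamped P-control, 0-100%

      def model_generated_controller(temp_error: float) -> float:  # hypothetical generated code
          return max(0.0, min(100.0, 4.0 * temp_error))

      def back_to_back(test_vectors, ref, new, tol=1e-6):
          """Return the inputs on which the two implementations disagree."""
          return [x for x in test_vectors if abs(ref(x) - new(x)) > tol]

      fails = back_to_back([-5.0, 0.0, 3.2, 40.0],
                           legacy_controller, model_generated_controller)
      print("PASS" if not fails else f"FAIL on inputs {fails}")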

  12. Investigation of the Formability of TRIP780 Steel Sheets

    NASA Astrophysics Data System (ADS)

    Song, Yang

    The formability of a metal sheet depends on its work hardening behaviour and its forming limits, and both aspects must be carefully determined in order to accurately simulate a particular forming process. This research aims to characterize the formability of a TRIP780 sheet steel using advanced experimental testing and analysis techniques. A series of flat rolling and tensile tests, as well as shear tests, were conducted to determine the large-deformation work hardening behaviour of this TRIP780 steel. Nakazima tests were carried out up to fracture to determine the forming limits of this sheet material. A highly automated method for generating a robust forming limit curve (FLC) for sheet materials from digital image correlation (DIC) strain measurements was created with the help of finite element simulations and evaluated against the conventional method. A correction algorithm that aims to compensate for the process-dependent effects in the Nakazima test was implemented and tested with some success.

  13. Orbital Express Advanced Video Guidance Sensor: Ground Testing, Flight Results and Comparisons

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Howard, Richard T.; Heaton, Andrew F.

    2008-01-01

    Orbital Express (OE) was a successful mission demonstrating automated rendezvous and docking. The 2007 mission consisted of two spacecraft, the Autonomous Space Transport Robotic Operations (ASTRO) vehicle and the Next Generation Serviceable Satellite (NEXTSat), which were designed to work together and test a variety of service operations in orbit. The Advanced Video Guidance Sensor (AVGS) was included as one of the primary proximity navigation sensors on board the ASTRO, and was one of four sensors that provided relative position and attitude between the two vehicles. Marshall Space Flight Center was responsible for the AVGS software and testing (especially the extensive ground testing), flight operations support, and analysis of the flight data. This paper briefly describes the mission, the data taken on-orbit, the ground testing that occurred, and finally comparisons between flight data and ground test data for two different flight regimes.

  14. Comparison of BrainTool to other UML modeling and model transformation tools

    NASA Astrophysics Data System (ADS)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

    Over the last 30 years, numerous model-driven approaches to software development have been offered to address problems with development productivity and the quality of the resulting software. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar claims for Unified Modeling Language (UML) models at different levels of abstraction, and software development using CASE tools is said to enable a significant level of automation. Today's CASE tools usually offer a combination of several features: traditional tools provide a model editor and a model repository, while the most advanced ones add a code generator (possibly driven by a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor for defining new transformations. The present paper contains the results of a comparison of CASE tools (mainly UML editors) with respect to the level of automation they offer.

  15. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.

  16. Fast and Efficient Fragment-Based Lead Generation by Fully Automated Processing and Analysis of Ligand-Observed NMR Binding Data.

    PubMed

    Peng, Chen; Frommlet, Alexandra; Perez, Manuel; Cobas, Carlos; Blechschmidt, Anke; Dominguez, Santiago; Lingel, Andreas

    2016-04-14

    NMR binding assays are routinely applied in hit finding and validation during early stages of drug discovery, particularly for fragment-based lead generation. To this end, compound libraries are screened by ligand-observed NMR experiments such as STD, T1ρ, and CPMG to identify molecules interacting with a target. The analysis of a high number of complex spectra is performed largely manually and therefore represents a limiting step in hit generation campaigns. Here we report a novel integrated computational procedure that processes and analyzes ligand-observed proton and fluorine NMR binding data in a fully automated fashion. A performance evaluation comparing automated and manual analysis results on (19)F- and (1)H-detected data sets shows that the program delivers robust, high-confidence hit lists in a fraction of the time needed for manual analysis and greatly facilitates visual inspection of the associated NMR spectra. These features enable considerably higher throughput, the assessment of larger libraries, and shorter turn-around times.

  17. A performance and failure analysis of SAPHIRE with a MEDLINE test collection.

    PubMed Central

    Hersh, W R; Hickam, D H; Haynes, R B; McKibbon, K A

    1994-01-01

    OBJECTIVE: Assess the performance of the SAPHIRE automated information retrieval system. DESIGN: Comparative study of automated and human searching of a MEDLINE test collection. MEASUREMENTS: Recall and precision of SAPHIRE were compared with those of novice physicians, expert physicians, and librarians for a test collection of 75 queries and 2,334 citations. Failure analysis assessed the efficacy of the Metathesaurus as a concept vocabulary; the reasons for retrieval of nonrelevant articles and nonretrieval of relevant articles; and the effect of changing the weighting formula for relevance ranking of retrieved articles. RESULTS: Recall and precision of SAPHIRE were comparable to those of both physician groups, but less than those of librarians. CONCLUSION: The current version of the Metathesaurus, as utilized by SAPHIRE, was unable to represent the conceptual content of one-fourth of physician-generated MEDLINE queries. The most likely cause for retrieval of nonrelevant articles was the presence of some or all of the search terms in the article, with frequencies high enough to lead to retrieval. The most likely cause for nonretrieval of relevant articles was the absence of the actual terms from the query, with synonyms or hierarchically related terms present instead. There were significant variations in performance when SAPHIRE's concept-weighting formulas were modified. PMID:7719787
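
    For reference, recall and precision as used in this evaluation are the standard retrieval measures; the sketch below uses illustrative counts, not the study data.

      def recall_precision(relevant_retrieved: int, relevant_total: int,
                           retrieved_total: int):
          """Recall = fraction of relevant citations retrieved;
          precision = fraction of retrieved citations that are relevant."""
          return (relevant_retrieved / relevant_total,
                  relevant_retrieved / retrieved_total)

      # Hypothetical query: 50 relevant citations exist, 60 are retrieved,
      # 30 of which are relevant.
      r, p = recall_precision(relevant_retrieved=30, relevant_total=50,
                              retrieved_total=60)
      print(f"recall {r:.0%}, precision {p:.0%}")  # recall 60%, precision 50%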

  18. A flight test design for studying airborne applications of air to ground duplex data link communications

    NASA Technical Reports Server (NTRS)

    Scanlon, Charles H.

    1988-01-01

    The Automatic En Route Air Traffic Control (AERA) and the Advanced Automated System (AAS) of the NAS plan call for the utilization of data links for such items as computer-generated flight clearances, en route minimum safe altitude warnings, sector probes, out-of-conformance checks, automated flight services, and flow management advisories. A major remaining technical challenge is the integration, flight testing, and validation of data link equipment and procedures in the aircraft cockpit. The flight test was designed to have the airplane side of the data link experiments implemented in the NASA Langley Research Center (LaRC) experimental Boeing 737 airplane. This design would enable investigations into the implementation of data link equipment and the pilot interface, operations, and procedures. The ground system consists of a workstation with links to a national weather database and a data link transceiver system. The data link transceiver system could be a Mode-S transponder, ACARS, AVSAT, or another type of radio system such as a military-type HF data link. The airborne system was designed so that a data link transceiver, workstation, and touch panel could be interfaced through an input/output processor to the aircraft system bus and thus have communications access to other digital airplane systems.

  19. Flight-deck automation - Promises and problems

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.; Curry, R. E.

    1980-01-01

    The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand, negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilizing, and improving control and monitoring systems. Investigation into flight-deck automation systems is important because the knowledge gained can be applied to other systems such as air traffic control and nuclear power generation, but the many problems encountered with automated systems need to be analyzed and overcome in future research.

  20. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.
