Space Telecommunications Radio System (STRS) Compliance Testing
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2011-01-01
The Space Telecommunications Radio System (STRS) defines an open architecture for software defined radios. This document describes the testing methodology to aid in determining the degree of compliance to the STRS architecture. Non-compliances are reported to the software and hardware developers as well as the NASA project manager so that any non-compliances may be fixed or waivers issued. Since the software developers may be divided into those that provide the operating environment including the operating system and STRS infrastructure (OE) and those that supply the waveform applications, the tests are divided accordingly. The static tests are also divided by the availability of an automated tool that determines whether the source code and configuration files contain the appropriate items. Thus, there are six separate step-by-step test procedures described as well as the corresponding requirements that they test. The six types of STRS compliance tests are: STRS application automated testing, STRS infrastructure automated testing, STRS infrastructure testing by compiling WFCCN with the infrastructure, STRS configuration file testing, STRS application manual code testing, and STRS infrastructure manual code testing. Examples of the input and output of the scripts are shown in the appendices as well as more specific information about what to configure and test in WFCCN for non-compliance. In addition, each STRS requirement is listed and the type of testing briefly described. Attached is also a set of guidelines on what to look for in addition to the requirements to aid in the document review process.
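The static checks described above lend themselves to simple scripted scans. The sketch below is a minimal illustration of that idea, assuming a hypothetical list of required STRS application API names and plain regular-expression matching; the actual compliance scripts, requirement set and report format are those defined in the document itself.

```python
# Minimal sketch of an automated static compliance scan (illustrative only;
# the method names and report format are assumptions, not the STRS test scripts).
import re
import sys
from pathlib import Path

# Hypothetical subset of required STRS application methods.
REQUIRED_APP_METHODS = ["APP_Initialize", "APP_Start", "APP_Stop", "APP_Configure"]

def check_source_tree(root: Path) -> dict:
    """Scan C/C++ sources and report which required methods were not found."""
    sources = list(root.rglob("*.c")) + list(root.rglob("*.cpp")) + list(root.rglob("*.h"))
    text = "\n".join(p.read_text(errors="ignore") for p in sources)
    return {m: bool(re.search(rf"\b{m}\s*\(", text)) for m in REQUIRED_APP_METHODS}

if __name__ == "__main__":
    results = check_source_tree(Path(sys.argv[1]))
    for method, found in results.items():
        print(f"{method}: {'OK' if found else 'NON-COMPLIANT (not found)'}")
```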
NASA Astrophysics Data System (ADS)
Niggemann, F.; Appel, F.; Bach, H.; de la Mar, J.; Schirpke, B.; Dutting, K.; Rucker, G.; Leimbach, D.
2015-04-01
To address the challenges of effective data handling faced by Small and Medium-Sized Enterprises (SMEs), a cloud-based infrastructure for accessing and processing Earth Observation (EO) data has been developed within the project APPS4GMES (www.apps4gmes.de). To gain homogeneous multi-mission data access, an Input Data Portal (IDP) has been implemented on this infrastructure. The IDP consists of an Open Geospatial Consortium (OGC)-conformant catalogue, a consolidation module for format conversion and an OGC-conformant ordering framework. Metadata from various EO sources, delivered in different standards, are harvested, transformed to the OGC-conformant Earth Observation Product standard and inserted into the catalogue by a Metadata Harvester. The IDP can be accessed for search and ordering of the harvested datasets by the services implemented on the cloud infrastructure. Different land-surface services have been realised by the project partners using the implemented IDP and cloud infrastructure. Their results are customer-ready products as well as pre-products (e.g. atmospherically corrected EO data) that serve as a basis for other services. Within the IDP, automated access to ESA's Sentinel-1 Scientific Data Hub has been implemented, so searching and downloading of the SAR data can be performed in an automated way. With the implementation of the Sentinel-1 Toolbox and in-house software, processing of the datasets for further use, for example for Vista's snow monitoring, which delivers input to the flood forecast services, can also be performed automatically. For performance tests of the cloud environment, a sophisticated model-based atmospheric correction and pre-classification service has been implemented. The tests comprised automated, synchronised processing of one entire Landsat 8 (LS-8) coverage of Germany and performance comparisons with standard desktop systems. The results, showing a performance improvement by a factor of six, proved the high flexibility and computing power of the cloud environment. To make full use of the cloud capabilities, automated upscaling of the hardware resources has been implemented. Together with the IDP infrastructure, fast and automated processing of various satellite sources into market-ready products can be realised, so that increasing customer needs and numbers can be satisfied without loss of accuracy and quality.
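As an illustration of the automated Sentinel-1 search step, the sketch below queries an OpenSearch-style catalogue with the requests library; the endpoint URL, query syntax and credentials are assumptions for illustration and are not taken from the APPS4GMES implementation.

```python
# Illustrative catalogue search sketch; endpoint, query syntax and credentials
# are placeholders, not the project's actual interface.
import requests

SEARCH_URL = "https://scihub.copernicus.eu/dhus/search"  # assumed endpoint
AUTH = ("user", "password")  # placeholder credentials

def search_sentinel1(area_wkt: str, start: str, end: str, rows: int = 10) -> str:
    """Return the raw OpenSearch (Atom/XML) response for matching Sentinel-1 scenes."""
    query = (f'platformname:Sentinel-1 AND footprint:"Intersects({area_wkt})" '
             f"AND beginposition:[{start} TO {end}]")
    resp = requests.get(SEARCH_URL, params={"q": query, "rows": rows}, auth=AUTH, timeout=60)
    resp.raise_for_status()
    return resp.text

# Example call with a small bounding polygon and a one-month time window:
# feed = search_sentinel1("POLYGON((10 47,12 47,12 49,10 49,10 47))",
#                         "2015-01-01T00:00:00.000Z", "2015-01-31T23:59:59.999Z")
```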
Key Management Infrastructure Increment 2 (KMI Inc 2)
2016-03-01
2016 Major Automated Information System Annual Report for Key Management Infrastructure Increment 2 (KMI Inc 2). Program information: Program Name, Key Management Infrastructure Increment 2 (KMI Inc 2); DoD Component, DoD; assigned April 6, 2015.
Spaceport Command and Control System Automation Testing
NASA Technical Reports Server (NTRS)
Plano, Tom
2017-01-01
The goal of automated testing is to create and maintain a cohesive infrastructure of robust tests that could be run independently on a software package in its entirety. To that end, the Spaceport Command and Control System (SCCS) project at the National Aeronautics and Space Administration's (NASA) Kennedy Space Center (KSC) has brought in a large group of interns to work side-by-side with full time employees to do just this work. Thus, our job is to implement the tests that will put SCCS through its paces.
The Electrolyte Genome project: A big data approach in battery materials discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xiaohui; Jain, Anubhav; Rajput, Nav Nidhi
2015-06-01
We present a high-throughput infrastructure for the automated calculation of molecular properties with a focus on battery electrolytes. The infrastructure is largely open-source and handles both practical aspects (input file generation, output file parsing, and information management) as well as more complex problems (structure matching, salt complex generation, and failure recovery). Using this infrastructure, we have computed the ionization potential (IP) and electron affinity (EA) of 4830 molecules relevant to battery electrolytes (encompassing almost 55,000 quantum mechanics calculations) at the B3LYP/6-31+G* level. We describe automated workflows for computing redox potential, dissociation constant, and salt-molecule binding complex structure generation. We present routines for automatic recovery from calculation errors, which bring the failure rate from 9.2% down to 0.8% for the QChem DFT code. Automated algorithms to check duplication between two arbitrary molecules and structures are described. We present benchmark data on basis sets and functionals for the G2-97 test set; one finding is that an IP/EA calculation method that combines PBE geometry optimization with B3LYP energy evaluation requires less computational cost and yields nearly identical results compared with a full B3LYP calculation, and could be suitable for the calculation of large molecules. Our data indicate that, among the 8 functionals tested, XYGJ-OS and B3LYP are the two best functionals for predicting IP/EA, with RMSEs of 0.12 and 0.27 eV, respectively. Application of our automated workflow to a large set of quinoxaline derivative molecules shows that the functional group effect and the substitution position effect can be separated for the IP/EA of quinoxaline derivatives, and that the most sensitive position is different for IP and EA. Published by Elsevier B.V.
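For reference, the IP and EA values discussed above follow from total-energy differences between the neutral molecule and its ions; the scheme combining a cheaper PBE geometry optimization with a B3LYP energy evaluation approximates these same differences at lower cost. A standard formulation (not reproduced from the paper) is:

$$ \mathrm{IP} = E\left(M^{+}\right) - E\left(M\right), \qquad \mathrm{EA} = E\left(M\right) - E\left(M^{-}\right) $$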
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; Painho, M.
2017-09-01
The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart-cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop an automated, integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after event evaluation in real time. The research formalizes the notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geospatial data on a widely used platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize the sensor web dynamically and in real time for the smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart-city platforms by utilizing sensor web and spatial information, achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), keeping the entire platform open access and open source. SENSDI is based on GeoNode, QGIS and Java, which bind most of the functionality of the Internet, the sensor web and, increasingly, the Internet of Things, which is superseding the Internet of Sensors. In a nutshell, the project delivers a generalized, real-time, accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.
Cislunar space infrastructure: Lunar technologies
NASA Technical Reports Server (NTRS)
Faller, W.; Hoehn, A.; Johnson, S.; Moos, P.; Wiltberger, N.
1989-01-01
Continuing its emphasis on the creation of a cislunar infrastructure as an appropriate and cost-effective method of space exploration and development, the University of Colorado explores the technologies necessary for the creation of such an infrastructure, namely (1) automation and robotics; (2) life support systems; (3) fluid management; (4) propulsion; and (5) rotating technologies. The technological focal point is on the development of automated and robotic systems for the implementation of a Lunar Oasis produced by automation and robotics (LOAR). Under direction from the NASA Office of Exploration, automation and robotics have been extensively utilized as an initiating stage in the return to the Moon. A pair of autonomous rovers, modular in design and built from interchangeable and specialized components, is proposed. Utilizing a 'buddy system', these rovers will be able to support each other and to enhance their individual capabilities. One rover primarily explores and maps while the second rover tests the feasibility of various materials-processing techniques. The automated missions emphasize availability and potential uses of lunar resources and the deployment and operations of the LOAR program. An experimental bio-volume is put into place as the precursor to a Lunar Environmentally Controlled Life Support System. The bio-volume will determine the reproduction, growth and production characteristics of various life forms housed on the lunar surface. Physicochemical regenerative technologies and stored resources will be used to buffer biological disturbances of the bio-volume environment. The in situ lunar resources will be both tested and used within this bio-volume. Second phase development on the lunar surface calls for manned operations. Repairs and reconfiguration of the initial framework will ensue. An autonomously initiated, manned Lunar Oasis can become an essential component of the United States space program. The Lunar Oasis will provide support to science, technology, and commerce. It will enable more cost-effective space exploration to the planets and beyond.
A Method of Separation Assurance for Instrument Flight Procedures at Non-Radar Airports
NASA Technical Reports Server (NTRS)
Conway, Sheila R.; Consiglio, Maria
2002-01-01
A method to provide automated air traffic separation assurance services during approach to or departure from a non-radar, non-towered airport environment is described. The method is constrained by provision of these services without radical changes or ambitious investments in current ground-based technologies. The proposed procedures are designed to grant access to a large number of airfields that currently have no or very limited access under Instrument Flight Rules (IFR), thus increasing mobility with minimal infrastructure investment. This paper primarily addresses a low-cost option for airport and instrument approach infrastructure, but is designed to be an architecture from which a more efficient, albeit more complex, system may be developed. A functional description of the capabilities in the current NAS infrastructure is provided. Automated terminal operations and procedures are introduced. Rules of engagement and the operations are defined. Results of preliminary simulation testing are presented. Finally, application of the method to more terminal-like operations, and major research areas, including necessary piloted studies, are discussed.
WLCG scale testing during CMS data challenges
NASA Astrophysics Data System (ADS)
Gutsche, O.; Hajdu, C.
2008-07-01
The CMS computing model to process and analyze LHC collision data follows a data-location driven approach and is using the WLCG infrastructure to provide access to GRID resources. As a preparation for data taking, CMS tests its computing model during dedicated data challenges. An important part of the challenges is the test of the user analysis which poses a special challenge for the infrastructure with its random distributed access patterns. The CMS Remote Analysis Builder (CRAB) handles all interactions with the WLCG infrastructure transparently for the user. During the 2006 challenge, CMS set its goal to test the infrastructure at a scale of 50,000 user jobs per day using CRAB. Both direct submissions by individual users and automated submissions by robots were used to achieve this goal. A report will be given about the outcome of the user analysis part of the challenge using both the EGEE and OSG parts of the WLCG. In particular, the difference in submission between both GRID middlewares (resource broker vs. direct submission) will be discussed. In the end, an outlook for the 2007 data challenge is given.
CIS-lunar space infrastructure lunar technologies: Executive summary
NASA Technical Reports Server (NTRS)
Faller, W.; Hoehn, A.; Johnson, S.; Moos, P.; Wiltberger, N.
1989-01-01
Technologies necessary for the creation of a cis-Lunar infrastructure, namely: (1) automation and robotics; (2) life support systems; (3) fluid management; (4) propulsion; and (5) rotating technologies, are explored. The technological focal point is on the development of automated and robotic systems for the implementation of a Lunar Oasis produced by Automation and Robotics (LOAR). Under direction from the NASA Office of Exploration, automation and robotics were extensively utilized as an initiating stage in the return to the Moon. A pair of autonomous rovers, modular in design and built from interchangeable and specialized components, is proposed. Utilizing a buddy system, these rovers will be able to support each other and to enhance their individual capabilities. One rover primarily explores and maps while the second rover tests the feasibility of various materials-processing techniques. The automated missions emphasize availability and potential uses of Lunar resources, and the deployment and operations of the LOAR program. An experimental bio-volume is put into place as the precursor to a Lunar environmentally controlled life support system. The bio-volume will determine the reproduction, growth and production characteristics of various life forms housed on the Lunar surface. Physicochemical regenerative technologies and stored resources will be used to buffer biological disturbances of the bio-volume environment. The in situ Lunar resources will be both tested and used within this bio-volume. Second phase development on the Lunar surface calls for manned operations. Repairs and re-configuration of the initial framework will ensue. An autonomously-initiated manned Lunar oasis can become an essential component of the United States space program.
Testing as a Service with HammerCloud
NASA Astrophysics Data System (ADS)
Medrano Llamas, Ramón; Barrand, Quentin; Elmsheuser, Johannes; Legger, Federica; Sciacca, Gianfranco; Sciabà, Andrea; van der Ster, Daniel
2014-06-01
HammerCloud was designed and built to meet the needs of the grid community to test resources and automate operations from a user perspective. Recent developments in the IT space point to a shift towards software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring are an integral part of the development, validation and operation of big systems like the grid. This area is not escaping the paradigm shift, and we are starting to perceive Testing as a Service (TaaS) offerings as natural; they allow testing of any infrastructure service, such as the Infrastructure as a Service (IaaS) platforms being deployed in many grid sites, from both the functional and the stress perspectives. This work reviews the recent developments in HammerCloud and its evolution towards a TaaS conception, in particular its deployment on the Agile Infrastructure platform at CERN and the testing of many IaaS providers across Europe in the context of experiment requirements. The first section reviews the architectural changes that a service running in the cloud needs, such as an orchestration service or new storage requirements, in order to provide functional and stress testing. The second section reviews the first tests of infrastructure providers from the perspective of the challenges discovered at the architectural level. Finally, the third section evaluates future requirements for scalability and features to increase testing productivity.
A Flight Control System Architecture for the NASA AirSTAR Flight Test Infrastructure
NASA Technical Reports Server (NTRS)
Murch, Austin M.
2008-01-01
A flight control system architecture for the NASA AirSTAR infrastructure has been designed to address the challenges associated with safe and efficient flight testing of research control laws in adverse flight conditions. The AirSTAR flight control system provides a flexible framework that enables NASA Aviation Safety Program research objectives, and includes the ability to rapidly integrate and test research control laws, emulate component or sensor failures, inject automated control surface perturbations, and provide a baseline control law for comparison to research control laws and to increase operational efficiency. The current baseline control law uses an angle of attack command augmentation system for the pitch axis and simple stability augmentation for the roll and yaw axes.
Scaling the PuNDIT project for wide area deployments
NASA Astrophysics Data System (ADS)
McKee, Shawn; Batista, Jorge; Carcassi, Gabriele; Dovrolis, Constantine; Lee, Danny
2017-10-01
In today’s world of distributed scientific collaborations, there are many challenges to providing reliable inter-domain network infrastructure. Network operators use a combination of active monitoring and trouble tickets to detect problems, but these are often ineffective at identifying issues that impact wide-area network users. Additionally, these approaches do not scale to wide-area inter-domain networks due to the unavailability of data from all the domains along typical network paths. The Pythia Network Diagnostic InfrasTructure (PuNDIT) project aims to create a scalable infrastructure for automating the detection and localization of problems across these networks. The project goal is to gather and analyze metrics from existing perfSONAR monitoring infrastructures to identify the signatures of possible problems, locate affected network links, and report them to the user in an intuitive fashion. Simply put, PuNDIT seeks to convert complex network metrics into easily understood diagnoses in an automated manner. We present our progress in developing, testing and deploying the PuNDIT system, report on the project progress to date, describe the current implementation architecture, and demonstrate some of the user interfaces it will support. We close by discussing the remaining challenges and next steps, and where we see the project going in the future.
BioBlend: automating pipeline analyses within Galaxy and CloudMan.
Sloggett, Clare; Goonasekera, Nuwan; Afgan, Enis
2013-07-01
We present BioBlend, a unified API in a high-level language (Python) that wraps the functionality of the Galaxy and CloudMan APIs. BioBlend makes it easy for bioinformaticians to automate end-to-end large data analysis, from scratch, in a way that is highly accessible to collaborators, by allowing them to both provide the required infrastructure and automate complex analyses over large datasets within the familiar Galaxy environment. http://bioblend.readthedocs.org/. Automated installation of BioBlend is available via PyPI (e.g. pip install bioblend). Alternatively, the source code is available from the GitHub repository (https://github.com/afgane/bioblend) under the MIT open source license. The library has been tested and is working on Linux, Macintosh and Windows-based systems.
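A minimal usage sketch of the BioBlend Galaxy client is shown below; the server URL and API key are placeholders, and the calls shown (creating a history, listing workflows) are only a small illustrative subset of the API.

```python
# Minimal BioBlend sketch; URL and API key are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

# Create a history to collect the outputs of an automated analysis run.
history = gi.histories.create_history(name="automated-analysis")

# Enumerate workflows available to this account; each entry is a dict with 'id' and 'name'.
for wf in gi.workflows.get_workflows():
    print(wf["id"], wf["name"])
```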
Perspectives on bioanalytical mass spectrometry and automation in drug discovery.
Janiszewski, John S; Liston, Theodore E; Cole, Mark J
2008-11-01
The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.
Kwak, Jihoon; Genovesio, Auguste; Kang, Myungjoo; Hansen, Michael Adsett Edberg; Han, Sung-Jun
2015-01-01
Genotoxicity testing is an important component of toxicity assessment. As illustrated by the European registration, evaluation, authorization, and restriction of chemicals (REACH) directive, it concerns all the chemicals used in industry. The commonly used in vivo mammalian tests appear to be ill adapted to tackle the large compound sets involved, due to throughput, cost, and ethical issues. The somatic mutation and recombination test (SMART) represents a more scalable alternative, since it uses Drosophila, which develops faster and requires less infrastructure. Despite these advantages, the manual scoring of the hairs on Drosophila wings required for the SMART limits its usage. To overcome this limitation, we have developed an automated SMART readout. It consists of automated imaging, followed by an image analysis pipeline that measures individual wing genotoxicity scores. Finally, we have developed a wing score-based dose-dependency approach that can provide genotoxicity profiles. We have validated our method using 6 compounds, obtaining profiles almost identical to those obtained from manual measures, even for low-genotoxicity compounds such as urethane. The automated SMART, with its faster and more reliable readout, fulfills the need for a high-throughput in vivo test. The flexible imaging strategy we describe and the analysis tools we provide should facilitate the optimization and dissemination of our methods. PMID:25830368
2015-08-18
An Arena 60 Discrete Photometric Analyzer System and ancillary instrumentation were acquired to increase analytical infrastructure at West Virginia State University. Principal accomplishments: one postdoctoral fellow was trained using the automated Arena 60 Discrete Photometric Analyzer.
Besada, Juan A.; Bergesio, Luca; Campaña, Iván; Vaquero-Melchor, Diego; Bernardos, Ana M.; Casar, José R.
2018-01-01
This paper describes a Mission Definition System and the automated flight process it enables to implement measurement plans for discrete infrastructure inspections using aerial platforms, and specifically multi-rotor drones. The mission definition aims at improving planning efficiency with respect to state-of-the-art waypoint-based techniques, using high-level mission definition primitives and linking them with realistic flight models to simulate the inspection in advance. It also provides flight scripts and measurement plans which can be executed by commercial drones. Its user interfaces facilitate mission definition, pre-flight 3D synthetic mission visualisation and flight evaluation. Results are delivered for a set of representative infrastructure inspection flights, showing the accuracy of the flight prediction tools in actual operations using automated flight control. PMID:29641506
Automated rendezvous and capture development infrastructure
NASA Technical Reports Server (NTRS)
Bryan, Thomas C.; Roe, Fred; Coker, Cynthia
1992-01-01
The facilities at Marshall Space Flight Center and JSC to be utilized to develop and test an autonomous rendezvous and capture (ARC) system are described. This includes equipment and personnel facility capabilities to devise, develop, qualify, and integrate ARC elements and subsystems into flight programs. Attention is given to the use of a LEO test facility, the current concept and unique system elements of the ARC, and the options available to develop ARC technology.
Automated crack detection in conductive smart-concrete structures using a resistor mesh model
NASA Astrophysics Data System (ADS)
Downey, Austin; D'Alessandro, Antonella; Ubertini, Filippo; Laflamme, Simon
2018-03-01
Various nondestructive evaluation techniques are currently used to automatically detect and monitor cracks in concrete infrastructure. However, these methods often lack scalability and cost-effectiveness over large geometries. A solution is the use of self-sensing carbon-doped cementitious materials. These self-sensing materials are capable of providing a measurable change in electrical output that can be related to their damage state. Previous work by the authors showed that a resistor mesh model could be used to track damage in structural components fabricated from electrically conductive concrete, where damage was located through the identification of high-resistance-value resistors in the resistor mesh model. In this work, an automated damage detection strategy is introduced that works by placing high-value resistors into the previously developed resistor mesh model using a sequential Monte Carlo method. Here, high-value resistors are used to mimic the internal condition of damaged cementitious specimens. The proposed automated damage detection method is experimentally validated using a 500 × 500 × 50 mm3 reinforced cement paste plate doped with multi-walled carbon nanotubes and exposed to 100 identical impact tests. Results demonstrate that the proposed Monte Carlo method is capable of detecting and localizing the most prominent damage in a structure, demonstrating that automated damage detection in smart-concrete structures is a promising strategy for real-time structural health monitoring of civil infrastructure.
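The sketch below is a simplified, self-contained illustration of the damage-localization idea, assuming a small square resistor grid, synthetic probe measurements and a basic random search over candidate damaged edges; it is not the authors' sequential Monte Carlo implementation or their experimental configuration.

```python
# Illustrative resistor-mesh damage localization (assumed grid, synthetic data).
import itertools
import numpy as np

def grid_edges(n):
    """Resistor edges of an n-by-n grid of nodes (4-neighbour connectivity)."""
    def idx(r, c):
        return r * n + c
    edges = []
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                edges.append((idx(r, c), idx(r, c + 1)))
            if r + 1 < n:
                edges.append((idx(r, c), idx(r + 1, c)))
    return edges

def effective_resistances(n, conductances, edges, probe_pairs):
    """Effective resistance between probe node pairs via the graph Laplacian."""
    L = np.zeros((n * n, n * n))
    for g, (i, j) in zip(conductances, edges):
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    Lp = np.linalg.pinv(L)  # pseudoinverse handles the Laplacian's null space
    return np.array([Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for i, j in probe_pairs])

n = 5
edges = grid_edges(n)
corners = [0, n - 1, n * (n - 1), n * n - 1]
probe_pairs = list(itertools.combinations(corners, 2))

# Synthetic "damaged" specimen: one edge loses almost all of its conductance.
true_damaged = 17
g_true = np.ones(len(edges))
g_true[true_damaged] = 1e-3
measured = effective_resistances(n, g_true, edges, probe_pairs)

# Random search over candidate damaged edges (every edge visited once, in
# random order); the candidate that best reproduces the measurements wins.
rng = np.random.default_rng(0)
best_edge, best_err = None, np.inf
for edge in rng.permutation(len(edges)):
    g = np.ones(len(edges))
    g[edge] = 1e-3
    err = np.linalg.norm(effective_resistances(n, g, edges, probe_pairs) - measured)
    if err < best_err:
        best_edge, best_err = edge, err
print("most likely damaged edge:", best_edge, "true:", true_damaged)
```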
An automated repair method of water pipe infrastructure using carbon fiber bundles
NASA Astrophysics Data System (ADS)
Wisotzkey, Sean; Carr, Heath; Fyfe, Ed
2011-04-01
The United States water pipe infrastructure is made up of over 2 million miles of pipe. Due to age and deterioration, a large portion of this pipe is in need of repair to prevent catastrophic failures. Current repair methods generally involve intrusive techniques that can be time consuming and costly, but also can cause major societal impacts. A new automated repair method incorporating innovative carbon fiber technology is in development. This automated method would eliminate the need for trenching and would vastly cut time and labor costs, providing a much more economical pipe repair solution.
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
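As an illustration of how natural-language scenarios map to automated steps, the sketch below uses the Python behave library; the feature text, step names and the placeholder computation are invented for illustration and are not taken from the TELCAL test suite.

```python
# Illustrative behave step definitions (names and scenario invented).
#
# Feature file (e.g. calibration.feature):
#   Scenario: Compute an atmospheric calibration
#     Given a calibration request for 4 antennas
#     When the atmospheric calibration is computed
#     Then a result is produced for every antenna
from behave import given, when, then

@given("a calibration request for {count:d} antennas")
def step_request(context, count):
    context.antennas = list(range(count))

@when("the atmospheric calibration is computed")
def step_compute(context):
    # Placeholder for a call into the system under test.
    context.results = {a: 1.0 for a in context.antennas}

@then("a result is produced for every antenna")
def step_check(context):
    assert set(context.results) == set(context.antennas)
```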
Railroad infrastructure trespass detection performance guidelines
DOT National Transportation Integrated Search
2011-01-01
The U.S. Department of Transportation's John A. Volpe National Transportation Systems Center, under the direction of the Federal Railroad Administration, conducted a 3-year demonstration of an automated prototype railroad infrastructure security sy...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Wang, Hong; Young, Stan
Documenting the existing state of practice is an initial step in developing future control infrastructure to be co-deployed for a heterogeneous mix of connected and automated vehicles and human drivers while leveraging benefits to safety, congestion, and energy. With advances in information technology and extensive deployment of connected and automated vehicle technology anticipated over the coming decades, cities globally are making efforts to plan and prepare for these transitions. CAVs not only offer opportunities to improve transportation systems through enhanced safety and efficient operation of vehicles; there are also significant needs in terms of exploring how best to leverage vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I) and vehicle-to-everything (V2X) technology. Both the Connected Vehicle (CV) and Connected and Automated Vehicle (CAV) paradigms feature bi-directional connectivity and share similar applications in terms of signal control algorithms and infrastructure implementation. The discussion in our synthesis study assumes the CAV/CV context, where connectivity exists with or without automated vehicles. Our synthesis study explores the current state of signal control algorithms and infrastructure, reports the completed and newly proposed CV/CAV deployment studies regarding signal control schemes, reviews the deployment costs for CAV/AV signal infrastructure, and concludes with a discussion of opportunities, such as detector-free signal control schemes and dynamic performance management for intersections, and challenges, such as dependency on market adoption and the need to build a fault-tolerant signal system deployment in a CAV/CV environment. The study will serve as an initial critical assessment of existing signal control infrastructure (devices, control instruments, and firmware) and control schemes (actuated, adaptive, and coordinated green wave). The report will also help to identify the future needs for the signal infrastructure to act as the nervous system of urban transportation networks, providing not only signaling but also observability, surveillance, and measurement capacity. The discussion of the opportunity space includes network optimization and control theory perspectives, the current state of observability for key system parameters (what can be detected, how frequently it can be reported), and the controllability of dynamic parameters (this includes adjusting not only the signal phase and timing, but also the ability to alter vehicle trajectories through information or direct control). The perspective of observability and controllability of dynamic systems provides an appropriate lens to discuss future directions as CAV/CV become more prevalent.
Automation of University Libraries in Kerala: Status, Problems and Prospects
ERIC Educational Resources Information Center
Suku, J.; Pillai, Mini G.
2005-01-01
This paper discusses the present scenario of automation activities of university libraries in Kerala. The survey findings mainly cover various aspects of library automation such as information technology infrastructure, in-house activities, information services and their usage, manpower development, and budget. The paper briefly describes the role…
Cloud Environment Automation: from infrastructure deployment to application monitoring
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.
2017-10-01
The potential offered by the cloud paradigm is often limited by technical issues, rules and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult to apply and time-consuming for infrastructure maintainers. In this paper, the research activity carried out during the Open City Platform (OCP) research project [1], aimed at designing and developing an automatic tool for cloud-based IaaS deployment, is presented. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop and test new technological solutions that are open, interoperable and usable on-demand in the field of Cloud Computing, along with new sustainable organizational models that can be deployed for and adopted by Public Administrations (PA). The presented work and the related outcomes are aimed at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.
Railroad infrastructure trespassing detection systems research in Pittsford, New York
DOT National Transportation Integrated Search
2006-08-01
The U.S. Department of Transportation's Volpe National Transportation Systems Center, under the direction of the Federal Railroad Administration, conducted a 3-year demonstration of an automated prototype railroad infrastructure security system on ...
Production Maintenance Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason Gabler, David Skinner
2005-11-01
PMI is an XML framework for formulating tests of software and software environments that operate in a relatively push-button manner, i.e., can be automated, and that provide results that are readily consumable/publishable via RSS. Insofar as possible, the tests are carried out in a manner congruent with real usage. PMI drives shell scripts via a Perl program which is in charge of timing, validating each test, and controlling the flow through sets of tests. Testing in PMI is built up hierarchically. A suite of tests may start by testing basic functionality (file system is writable, compiler is found and functions, shell environment behaves as expected, etc.) and work up to larger, more complicated activities (execution of parallel code, file transfers, etc.). At each step in this hierarchy, a failure leads to generation of a text message or RSS item that can be tagged as to who should be notified of the failure. PMI has been directed at two functionalities: (1) regular and automated testing of multi-user environments, and (2) version-wise testing of new software releases prior to their deployment in a production mode.
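PMI itself is XML driving Perl and shell scripts, but the hierarchical stop-on-failure idea can be sketched in a few lines of Python; the test names, notification targets and checks below are illustrative assumptions only.

```python
# Illustration of hierarchical, stop-on-failure testing with per-test contacts.
import shutil
import subprocess
import tempfile
import time

def check_filesystem_writable():
    with tempfile.NamedTemporaryFile() as f:
        f.write(b"ok")
    return True

def check_compiler_found():
    return shutil.which("cc") is not None

def check_parallel_job():
    # Stand-in for submitting and timing a real parallel job.
    return subprocess.run(["true"]).returncode == 0

# Ordered from basic to complex; a failure stops the run and names a contact.
SUITE = [("filesystem writable", check_filesystem_writable, "storage-admins"),
         ("compiler functions", check_compiler_found, "software-admins"),
         ("parallel execution", check_parallel_job, "hpc-consultants")]

for name, test, notify in SUITE:
    start = time.time()
    ok = test()
    print(f"{name}: {'pass' if ok else 'FAIL'} ({time.time() - start:.2f}s)")
    if not ok:
        print(f"notify {notify}: '{name}' failed; skipping dependent tests")
        break
```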
The LabTube - a novel microfluidic platform for assay automation in laboratory centrifuges.
Kloke, A; Fiebach, A R; Zhang, S; Drechsel, L; Niekrawietz, S; Hoehl, M M; Kneusel, R; Panthel, K; Steigert, J; von Stetten, F; Zengerle, R; Paust, N
2014-05-07
Assay automation is the key for successful transformation of modern biotechnology into routine workflows. Yet, it requires considerable investment in processing devices and auxiliary infrastructure, which is not cost-efficient for laboratories with low or medium sample throughput or point-of-care testing. To close this gap, we present the LabTube platform, which is based on assay specific disposable cartridges for processing in laboratory centrifuges. LabTube cartridges comprise interfaces for sample loading and downstream applications and fluidic unit operations for release of prestored reagents, mixing, and solid phase extraction. Process control is achieved by a centrifugally-actuated ballpen mechanism. To demonstrate the workflow and functionality of the LabTube platform, we show two LabTube automated sample preparation assays from laboratory routines: DNA extractions from whole blood and purification of His-tagged proteins. Equal DNA and protein yields were observed compared to manual reference runs, while LabTube automation could significantly reduce the hands-on-time to one minute per extraction.
DOT National Transportation Integrated Search
1997-04-01
The infrastructure on which American society depends, in sectors such as transportation, finance, energy, and telecommunications, is becoming increasingly automated as advances in information technology open up new possibilities for improved service, ...
Green Infrastructure Design Evaluation Using the Automated Geospatial Watershed Assessment Tool
In arid and semi-arid regions, green infrastructure (GI) can address several issues facing urban environments, including augmenting water supply, mitigating flooding, decreasing pollutant loads, and promoting greenness in the built environment. An optimum design captures stormwat...
Evaluation of Green Infrastructure Designs Using the Automated Geospatial Watershed Assessment Tool
In arid and semi-arid regions, green infrastructure (GI) can address several issues facing urban environments, including augmenting water supply, mitigating flooding, decreasing pollutant loads, and promoting greenness in the built environment. An optimum design captures stormwat...
IEEE TRANSACTIONS ON CYBERNETICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig R. RIeger; David H. Scheidt; William D. Smart
2014-11-01
Modern societies depend on complex and critical infrastructures for energy, transportation, sustenance, medical care, emergency response, and communications security. As computers, automation, and information technology (IT) have advanced, these technologies have been exploited to enhance the efficiency of operating the processes that make up these infrastructures.
Joelsson, Daniel; Gates, Irina V; Pacchione, Diana; Wang, Christopher J; Bennett, Philip S; Zhang, Yuhua; McMackin, Jennifer; Frey, Tina; Brodbeck, Kristin C; Baxter, Heather; Barmat, Scott L; Benetti, Luca; Bodmer, Jean-Luc
2010-06-01
Vaccine manufacturing requires constant analytical monitoring to ensure reliable quality and a consistent safety profile of the final product. Concentration and bioactivity of active components of the vaccine are key attributes routinely evaluated throughout the manufacturing cycle and for product release and dosage. In the case of live attenuated virus vaccines, bioactivity is traditionally measured in vitro by infection of susceptible cells with the vaccine followed by quantification of virus replication, cytopathology or expression of viral markers. These assays are typically multi-day procedures that require trained technicians and constant attention. Considering the need for high volumes of testing, automation and streamlining of these assays is highly desirable. In this study, the automation and streamlining of a complex infectivity assay for Varicella Zoster Virus (VZV) containing test articles is presented. The automation procedure was completed using existing liquid handling infrastructure in a modular fashion, limiting custom-designed elements to a minimum to facilitate transposition. In addition, cellular senescence data provided an optimal population doubling range for long term, reliable assay operation at high throughput. The results presented in this study demonstrate a successful automation paradigm resulting in an eightfold increase in throughput while maintaining assay performance characteristics comparable to the original assay. Copyright 2010 Elsevier B.V. All rights reserved.
New York City Transit Authority automated transit infrastructure maintenance demonstration.
DOT National Transportation Integrated Search
2009-04-01
The objective of this pilot project was to demonstrate that the safety and reliability of the New York City Transit transportation system can be improved by automating the correlation and analysis of disparate track-related data. Through the use ...
An assessment of autonomous vehicles : traffic impacts and infrastructure needs : final report.
DOT National Transportation Integrated Search
2017-03-01
The project began by understanding the current state of practice and trends. NHTSA's four-level taxonomy for automated vehicles was used to classify smart driving technologies and infrastructure needs. The project used surveys to analyze and gain a...
Evaluation of green infrastructure designs using the Automated Geospatial Watershed Assessment Tool
USDA-ARS?s Scientific Manuscript database
In arid and semi-arid regions, green infrastructure (GI) designs can address several issues facing urban environments, including augmenting water supply, mitigating flooding, decreasing pollutant loads, and promoting greenness in the built environment. An optimum design captures stormwater, addressi...
Massachusetts Institute of Technology Consortium Agreement
1999-03-01
This is the third progress report of the M.I.T. Home Automation and Healthcare Consortium-Phase Two. It covers the majority of the new findings, concepts...research projects of home automation and healthcare, ranging from human modeling, patient monitoring, and diagnosis to new sensors and actuators, physical...aids, human-machine interfaces and home automation infrastructure. This report contains several patentable concepts, algorithms, and designs.
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
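Purely as an illustration of the data-driven idea (the Orion GN&C flight software and its database tool are not shown, and the field names below are invented), an automated sequence can be expressed as configuration data that the software interprets, so updating the sequence requires no recompilation:

```python
# Illustrative data-driven sequencing sketch; all field names are invented.
import json

SEQUENCE_JSON = """
[
  {"activity": "coast",     "duration_s": 120, "guidance_mode": "inertial"},
  {"activity": "burn_prep", "duration_s": 30,  "guidance_mode": "targeted"},
  {"activity": "main_burn", "duration_s": 45,  "guidance_mode": "closed_loop"}
]
"""

def run_sequence(config_text: str) -> None:
    """Execute automated transitions driven by configuration data, not compiled code."""
    for step in json.loads(config_text):
        print(f"entering '{step['activity']}' for {step['duration_s']} s "
              f"using guidance mode '{step['guidance_mode']}'")

run_sequence(SEQUENCE_JSON)
```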
Automated sensor networks to advance ocean science
NASA Astrophysics Data System (ADS)
Schofield, O.; Orcutt, J. A.; Arrott, M.; Vernon, F. L.; Peach, C. L.; Meisinger, M.; Krueger, I.; Kleinert, J.; Chao, Y.; Chien, S.; Thompson, D. R.; Chave, A. D.; Balasuriya, A.
2010-12-01
The National Science Foundation has funded the Ocean Observatories Initiative (OOI), which over the next five years will deploy infrastructure to expand scientists' ability to remotely study the ocean. The deployed infrastructure will be linked by a robust cyberinfrastructure (CI) that will integrate marine observatories into a coherent system-of-systems. OOI is committed to engaging the ocean sciences community during the construction phase. For the CI, this is being enabled by using a "spiral design strategy" allowing for input throughout the construction phase. In Fall 2009, the OOI CI development team used an existing ocean observing network in the Mid-Atlantic Bight (MAB) to test OOI CI software. The objective of this CI test was to aggregate data from ships, autonomous underwater vehicles (AUVs), shore-based radars, and satellites and make it available to five different data-assimilating ocean forecast models. Scientists used these multi-model forecasts to automate future glider missions in order to demonstrate the feasibility of two-way interactivity between the sensor web and predictive models. The CI software coordinated and prioritized the shared resources that allowed for the semi-automated reconfiguration of asset tasking, and thus enabled autonomous execution of observation plans for the fixed and mobile observation platforms. Efforts were coordinated through a web portal that provided an access point for the observational data and model forecasts. Researchers could use the CI software in tandem with the web data portal to assess the performance of individual numerical model results, or multi-model ensembles, through real-time comparisons with satellite, shore-based radar, and in situ robotic measurements. The resulting sensor net will enable a new means to explore and study the world's oceans by providing scientists a responsive observing network that can be accessed via any wireless network.
Manual and automation testing and verification of TEQ [ECI PROPRIETARY]
NASA Astrophysics Data System (ADS)
Abhichandra, Ravi; Jasmine Pemeena Priyadarsini, M.
2017-11-01
The telecommunication industry has progressed from 1G to 4G, and now 5G is gaining prominence. Given the pace of this transformation, technological obsolescence is becoming a serious issue to deal with. Adding to this is the fact that rolling out each technology requires ample investment in network, infrastructure, development, etc. As a result, the industry is becoming more dynamic and strategy oriented. It requires professionals who not only understand technology but can also evaluate it from a business perspective. The "Information Revolution" and the dramatic advances in telecommunications technology that have made it possible currently drive the global economy in large part. As wireless networks become more advanced and far-reaching, we are redefining the notion of connectivity and the possibilities of communications technology. In this paper, the optical cards are tested and verified, and the test procedure is automated using "TEQ", a new in-house technology developed by ECI Telecom that uses one of the optical cards itself to pump traffic at 100 Gbps.
A Framework for Testing Automated Detection, Diagnosis, and Remediation Systems on the Smart Grid
NASA Technical Reports Server (NTRS)
Lau, Shing-hon
2011-01-01
America's electrical grid is currently undergoing a multi-billion dollar modernization effort aimed at producing a highly reliable critical national infrastructure for power: a Smart Grid. While the goals for the Smart Grid include upgrades to accommodate large quantities of clean but transient renewable energy and upgrades to provide customers with real-time pricing information, perhaps the most important objective is to create an electrical grid with greatly increased robustness.
Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing
NASA Technical Reports Server (NTRS)
Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane
2012-01-01
Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, disaster response requires rapid access to large data volumes, substantial storage space and high-performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation covers work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code on a local prototype platform, and then transitioning this code, with associated environment requirements, onto an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
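For reference, the two band indices mentioned above are simple normalized band ratios; a minimal sketch with synthetic reflectance arrays (not the project's processing code) is shown below.

```python
# Standard band-ratio index formulas; band arrays here are synthetic placeholders.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndmi(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / np.clip(nir + swir, 1e-6, None)

# Example with synthetic reflectance bands:
nir = np.random.rand(4, 4)
red = np.random.rand(4, 4)
print(ndvi(nir, red))
```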
NASA Astrophysics Data System (ADS)
Pamulaparthy, Balakrishna; KS, Swarup; Kommu, Rajagopal
2014-12-01
Distribution automation (DA) applications are limited to the feeder level today and have no visibility outside of the substation feeder or down to the low-voltage distribution network level. This has become a major obstacle in realizing many automated functions and enhancing existing DA capabilities. Advanced metering infrastructure (AMI) systems are being widely deployed by utilities across the world, creating system-wide communications access to every monitoring and service point and collecting data from smart meters and sensors at short time intervals, in response to utility needs. The convergence of DA and AMI systems provides unique opportunities and capabilities for distribution grid modernization, with the DA system acting as a controller and the AMI system acting as feedback to the DA system, for which DA applications have to understand and use the AMI data selectively and effectively. In this paper, we propose a load segmentation method that helps the DA system to accurately understand and use the AMI data for various automation applications, with a suitable case study on power restoration.
DOT National Transportation Integrated Search
2015-09-23
This research project aimed to develop a remote sensing system capable of rapidly identifying fine-scale damage to critical transportation infrastructure following hazard events. Such a system must be pre-planned for rapid deployment, automate proces...
NASA Astrophysics Data System (ADS)
Farrell, K. W.
2015-10-01
The proposed Chryse Planitia EZ centered near the VL-1 landing site has evidence for adequate water ice, silica, and load-bearing bedrock surface resources to utilize as infrastructure for long-term missions to support humans.
NextGen Technologies on the FAA's Standard Terminal Automation Replacement System
NASA Technical Reports Server (NTRS)
Witzberger, Kevin; Swenson, Harry; Martin, Lynne; Lin, Melody; Cheng, Jinn-Hwei
2014-01-01
This paper describes the integration, evaluation, and results from a high-fidelity human-in-the-loop (HITL) simulation of key NASA Air Traffic Management Technology Demonstration-1 (ATD-1) technologies implemented in an enhanced version of the FAA's Standard Terminal Automation Replacement System (STARS) platform. These ATD-1 technologies include: (1) a NASA enhanced version of the FAA's Time-Based Flow Management, (2) a NASA ground-based automation technology known as controller-managed spacing (CMS), and (3) a NASA advanced avionics airborne technology known as flight-deck interval management (FIM). These ATD-1 technologies have been extensively tested in large-scale HITL simulations using general-purpose workstations to study air transportation technologies. These general-purpose workstations perform multiple functions and are collectively referred to as the Multi-Aircraft Control System (MACS). Researchers at NASA Ames Research Center and Raytheon collaborated to augment the STARS platform by including CMS and FIM advisory tools to validate the feasibility of integrating these automation enhancements into the current FAA automation infrastructure. NASA Ames acquired three STARS terminal controller workstations, and then integrated the ATD-1 technologies. HITL simulations were conducted to evaluate the ATD-1 technologies when using the STARS platform. These results were compared with the results obtained when the ATD-1 technologies were tested in the MACS environment. Results collected from the numerical data show acceptably minor differences, and, together with the subjective controller questionnaires showing a trend towards preferring STARS, validate the ATD-1/STARS integration.
The Automated Geospatial Watershed Assessment (AGWA) Urban tool provides a step-by-step process to model subdivisions using the KINEROS2 model, with and without Green Infrastructure (GI) practices. AGWA utilizes the Kinematic Runoff and Erosion (KINEROS2) model, an event driven, ...
Beshears, David L.; Batsell, Stephen G.; Abercrombie, Robert K.; Scudiere, Matthew B.; White, Clifford P.
2007-12-04
An asset identification and information infrastructure management (AI3M) device having an automated identification technology system (AIT), a Transportation Coordinators' Automated Information for Movements System II (TC-AIMS II), a weigh-in-motion system (WIM-II), and an Automated Air Load Planning system (AALPS) all in electronic communication for measuring and calculating actual asset characteristics, either statically or in-motion, and further calculating an actual load plan.
SHARP: Spacecraft Health Automated Reasoning Prototype
NASA Technical Reports Server (NTRS)
Atkinson, David J.
1991-01-01
Planetary spacecraft mission operations (OPS) as applied to SHARP are studied. The knowledge systems involved in this study are detailed, and the SHARP development task and Voyager telecom link analysis are examined. It was concluded that artificial intelligence has a proven capability to deliver useful functions in a real-time space flight operations environment. SHARP has precipitated a major change in the acceptance of automation at JPL. The potential payoff from automation using AI is substantial. SHARP and other AI technology are being transferred into systems in development, including mission operations automation, science data systems, and infrastructure applications.
NASA Astrophysics Data System (ADS)
Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.
2016-03-01
Cloud computing provides services on demand, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, databases and applications. Network usage and demands are growing at a very fast rate, and to meet current requirements there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because the decision-making processes for switching and routing are distributed and collocated on the same devices. Managing complex environments using traditional networks is time-consuming and expensive, especially when generating virtual machines, migrating them and configuring the network. To mitigate these challenges, network operations require efficient, flexible, agile and scalable software-defined networks (SDN). This paper discusses various issues in SDN and suggests how to mitigate the network-management-related issues. A private cloud prototype test bed was set up to implement SDN on the OpenStack platform and to test and evaluate network performance under various configurations.
Use of ICT in College Libraries in Karnataka, India: A Survey
ERIC Educational Resources Information Center
Kumar, B. T. Sampath; Biradar, B. S.
2010-01-01
Purpose: The purpose of this paper is to examine the use of information communication technology (ICT) in 31 college libraries in Karnataka, India by investigating the ICT infrastructure, the current status of library automation, barriers to the implementation of library automation, and librarians' attitudes towards the use of ICT.…
High-reliability computing for the smarter planet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather M; Graham, Paul; Manuzzato, Andrea
2010-01-01
The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, greater radiation reliability becomes necessary. Already critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.
Massachusetts Institute of Technology Consortium Agreement
1999-03-01
In this, our second progress report of the Phase Two Home Automation and Healthcare Consortium at the Brit and Alex d’Arbeloff Laboratory for... Covered here are the diverse fields of home automation and healthcare research, ranging from human modeling, patient monitoring, and diagnosis to new... sensors and actuators, physical aids, human-machine interface and home automation infrastructure. These results will be presented at the upcoming General Assembly of the Consortium held on October 27-October 30, 1998 at MIT.
NASA Technical Reports Server (NTRS)
Denney, Ewen W.
2015-01-01
The basic vision of AdvoCATE is to automate the creation, manipulation, and management of large-scale assurance cases based on a formal theory of argument structures. Its main purposes are for creating and manipulating argument structures for safety assurance cases using the Goal Structuring Notation (GSN), and as a test bed and proof-of-concept for the formal theory of argument structures. AdvoCATE is available for Windows 7, Macintosh OSX, and Linux. Eventually, AdvoCATE will serve as a dashboard for safety related information and provide an infrastructure for safety decisions and management.
2011-03-31
evidence based medicine into clinical practice. It will decrease costs and enable multiple stakeholders to work in an open content/source environment to exchange clinical content, develop and test technology and explore processes in applied CDS. Design: Comparative study between the KMR infrastructure and capabilities developed as an open source, vendor agnostic solution for aCPG execution within AHLTA and the current DoD/MHS standard evaluating: H1: An open source, open standard KMR and Clinical Decision Support Engine can enable organizations to share domain
Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure.
Epp, Tyler; Svecova, Dagmar; Cha, Young-Jin
2018-03-29
Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures.
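The paper's ANN pipeline and feature extraction are not reproduced in the abstract, so the sketch below only illustrates the core idea: train a small neural network on labelled impact-echo spectra so that no user-defined cutoff is needed. The feature extraction and synthetic data are assumptions for illustration.

```python
# Hedged sketch of the ANN classification idea: train a small neural network
# on impact-echo spectra labelled intact/damaged, removing the need for a
# hand-tuned cutoff. The feature extraction and synthetic data below are
# illustrative assumptions, not the authors' processing chain.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def spectrum_features(signal, n_bins=32):
    """Coarse magnitude spectrum of one impact-echo recording."""
    mag = np.abs(np.fft.rfft(signal))
    bins = np.array_split(mag, n_bins)
    return np.array([b.mean() for b in bins])

# Synthetic stand-in data: damaged locations get a lower-frequency resonance.
signals, labels = [], []
for i in range(200):
    t = np.linspace(0, 1e-3, 2048)
    damaged = i % 2
    freq = 8e3 if damaged else 15e3
    sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)
    signals.append(spectrum_features(sig))
    labels.append(damaged)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(signals), np.array(labels), test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```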
A Drupal-Based Collaborative Framework for Science Workflows
NASA Astrophysics Data System (ADS)
Pinheiro da Silva, P.; Gandara, A.
2010-12-01
Cyber-infrastructure combines technical infrastructure with organizational practices and social norms to support scientific teams working together, or dependent on each other, to conduct scientific research. Such cyber-infrastructure enables the sharing of information and data so that scientists can leverage knowledge and expertise through automation. Scientific workflow systems have been used to build automated scientific systems used by scientists to conduct scientific research and, as a result, create artifacts in support of scientific discoveries. These complex systems are often developed by teams of scientists who are located in different places, e.g., scientists working in distinct buildings, and sometimes in different time zones, e.g., scientists working in distinct national laboratories. The sharing of these workflow specifications is currently supported by the use of version control systems such as CVS or Subversion. Discussions about the design, improvement, and testing of these specifications, however, often happen elsewhere, e.g., through the exchange of email messages and IM chatting. Carrying on a discussion about these specifications is challenging because comments and specifications are not necessarily connected. For instance, the person reading a comment about a given workflow specification may not be able to see the workflow, and even if the person can see the workflow, they may not know to which part of the workflow a given comment applies. In this paper, we discuss the design, implementation and use of CI-Server, a Drupal-based infrastructure, to support the collaboration of both local and distributed teams of scientists using scientific workflows. CI-Server has three primary goals: to enable information sharing by providing tools that scientists can use within their scientific research to process data, publish and share artifacts; to build community by providing tools that support discussions between scientists about artifacts used or created through scientific processes; and to leverage the knowledge collected within the artifacts and scientific collaborations to support scientific discoveries.
NASA Technical Reports Server (NTRS)
Rothhaar, Paul M.; Murphy, Patrick C.; Bacon, Barton J.; Gregory, Irene M.; Grauer, Jared A.; Busan, Ronald C.; Croom, Mark A.
2014-01-01
Control of complex Vertical Take-Off and Landing (VTOL) aircraft traversing from hovering to wing-borne flight mode and back poses notoriously difficult modeling, simulation, control, and flight-testing challenges. This paper provides an overview of the techniques and advances required to develop the GL-10 tilt-wing, tilt-tail, long-endurance VTOL aircraft control system. The GL-10 prototype's unusual and complex configuration requires application of state-of-the-art techniques and some significant advances in wind tunnel infrastructure automation, efficient Design Of Experiments (DOE) tunnel test techniques, modeling, multi-body equations of motion, multi-body actuator models, simulation, control algorithm design, and flight test avionics, testing, and analysis. The following compendium surveys the key disciplines required to develop an effective control system for this challenging vehicle in this on-going effort.
Infrastructure-Free Mapping and Localization for Tunnel-Based Rail Applications Using 2D Lidar
NASA Astrophysics Data System (ADS)
Daoust, Tyler
This thesis presents an infrastructure-free mapping and localization framework for rail vehicles using only a lidar sensor. The method was designed to handle modern underground tunnels: narrow, parallel, and relatively smooth concrete walls. A sliding-window algorithm was developed to estimate the train's motion, using a Renyi's Quadratic Entropy (RQE)-based point-cloud alignment system. The method was tested with datasets gathered on a subway train travelling at high speeds, with 75 km of data across 14 runs, simulating 500 km of localization. The system was capable of mapping with an average error of less than 0.6 % by distance. It was capable of continuously localizing, relative to the map, to within 10 cm in stations and at crossovers, and 2.3 m in pathological sections of tunnel. This work has the potential to improve train localization in a tunnel, which can be used to increase capacity and for automation purposes.
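The thesis's RQE estimator is only summarized above, so the following is a hedged toy illustration of how a Renyi quadratic entropy style alignment cost behaves: the entropy of two point sets yields a Gaussian kernel correlation term, so maximizing the kernel sum between a scan and a map is equivalent to minimizing the cross-entropy term. The brute-force 1-D translation search, the toy tunnel geometry, and all parameters are assumptions, not the thesis implementation.

```python
# Hedged sketch of an RQE-style alignment cost for point clouds: a Gaussian
# kernel correlation between a lidar scan and a reference map. A toy 1-D
# translation search shows how the cost peaks at the correct offset; the
# thesis uses a full sliding-window motion estimator, not this toy search.
import numpy as np

def kernel_correlation(scan, ref, sigma=0.2):
    """Sum of Gaussian kernels between all scan/ref point pairs (2-D points)."""
    d2 = ((scan[:, None, :] - ref[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum()

rng = np.random.default_rng(2)
# Toy "tunnel walls": points along two parallel lines; the scan is the same
# geometry shifted 0.7 m along the tunnel with a little sensor noise.
x = np.linspace(0.0, 10.0, 200)
ref = np.vstack([np.c_[x, np.zeros_like(x)], np.c_[x, 3.0 * np.ones_like(x)]])
true_shift = 0.7
scan = ref + np.array([true_shift, 0.0]) + 0.02 * rng.standard_normal(ref.shape)

candidates = np.linspace(0.0, 1.5, 151)
scores = [kernel_correlation(scan - np.array([s, 0.0]), ref) for s in candidates]
print("estimated shift:", candidates[int(np.argmax(scores))], "m (true 0.7 m)")
```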
Modernization of B-2 Data, Video, and Control Systems Infrastructure
NASA Technical Reports Server (NTRS)
Cmar, Mark D.; Maloney, Christian T.; Butala, Vishal D.
2012-01-01
The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third-largest thermal-vacuum facility with propellant systems capability. B-2 has completed a modernization effort of its facility legacy data, video and control systems infrastructure to accommodate modern integrated testing and Information Technology (IT) Security requirements. Integrated systems tests have been conducted to demonstrate the new data, video and control systems functionality and capability. Discrete analog signal conditioners have been replaced by new programmable, signal-processing hardware that is integrated with the data system. This integration supports automated calibration and verification of the analog subsystem. Modern measurement systems analysis (MSA) tools are being developed to help verify system health and measurement integrity. Legacy hard-wired digital data systems have been replaced by distributed Fibre Channel (FC) network-connected digitizers where high-speed sampling rates have increased to 256,000 samples per second. Several analog video cameras have been replaced by digital image and storage systems. Hard-wired analog control systems have been replaced by Programmable Logic Controllers (PLC), fiber optic networks (FON) infrastructure and human-machine interface (HMI) operator screens. New modern IT Security procedures and schemes have been employed to control data access and process control flows. Due to the nature of testing possible at B-2, flexibility and configurability of systems has been central to the architecture during modernization.
The Advanced Technology Development Center (ATDC)
NASA Technical Reports Server (NTRS)
Clements, G. R.; Willcoxon, R. (Technical Monitor)
2001-01-01
NASA is building the Advanced Technology Development Center (ATDC) to provide a 'national resource' for the research, development, demonstration, testing, and qualification of Spaceport and Range Technologies. The ATDC will be located at Space Launch Complex 20 (SLC-20) at Cape Canaveral Air Force Station (CCAFS) in Florida. SLC-20 currently provides a processing and launch capability for small-scale rockets; this capability will be augmented with additional ATDC facilities to provide a comprehensive and integrated in situ environment. Examples of Spaceport Technologies that will be supported by ATDC infrastructure include densified cryogenic systems, intelligent automated umbilicals, integrated vehicle health management systems, next-generation safety systems, and advanced range systems. The ATDC can be thought of as a prototype spaceport where industry, government, and academia, in partnership, can work together to improve safety of future space initiatives. The ATDC is being deployed in five separate phases. Major ATDC facilities will include a Liquid Oxygen Area; a Liquid Hydrogen Area; a Liquid Nitrogen Area; a multipurpose Launch Mount; an 'Iron Rocket' Test Demonstrator; a Processing Facility with a Checkout and Control System; and Future Infrastructure Developments. Initial ATDC development will be completed in 2006.
Cost, Energy, and Environmental Impact of Automated Electric Taxi Fleets in Manhattan.
Bauer, Gordon S; Greenblatt, Jeffery B; Gerke, Brian F
2018-04-17
Shared automated electric vehicles (SAEVs) hold great promise for improving transportation access in urban centers while drastically reducing transportation-related energy consumption and air pollution. Using taxi-trip data from New York City, we develop an agent-based model to predict the battery range and charging infrastructure requirements of a fleet of SAEVs operating on Manhattan Island. We also develop a model to estimate the cost and environmental impact of providing service and perform extensive sensitivity analysis to test the robustness of our predictions. We estimate that costs will be lowest with a battery range of 50-90 mi and either 66 chargers per square mile rated at 11 kW or 44 chargers per square mile rated at 22 kW. We estimate that the cost of service provided by such an SAEV fleet will be $0.29-$0.61 per revenue mile, an order of magnitude lower than the cost of service of present-day Manhattan taxis and $0.05-$0.08/mi lower than that of an automated fleet composed of any currently available hybrid or internal combustion engine vehicle (ICEV). We estimate that such an SAEV fleet drawing power from the current NYC power grid would reduce GHG emissions by 73% and energy consumption by 58% compared to an automated fleet of ICEVs.
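To make the cost-per-revenue-mile idea concrete, here is a hedged back-of-envelope version of that calculation. Every number below is an illustrative assumption, not a figure from the paper, which uses a detailed agent-based fleet model rather than this simple arithmetic.

```python
# Hedged back-of-envelope version of a cost-per-revenue-mile estimate.
# All inputs are illustrative assumptions, not values from the study.
fleet_size = 7000                 # vehicles serving Manhattan (assumed)
revenue_miles_per_veh_day = 120   # miles carrying passengers (assumed)
deadhead_fraction = 0.15          # empty repositioning share of total miles (assumed)

vehicle_cost_per_day = 35.0       # amortized purchase + maintenance, $/veh/day (assumed)
electricity_per_mile = 0.30       # kWh/mi (assumed)
electricity_price = 0.12          # $/kWh (assumed)
charger_cost_per_veh_day = 3.0    # amortized charging infrastructure, $/veh/day (assumed)

total_miles = revenue_miles_per_veh_day / (1.0 - deadhead_fraction)
energy_cost = total_miles * electricity_per_mile * electricity_price
cost_per_day = vehicle_cost_per_day + charger_cost_per_veh_day + energy_cost
cost_per_revenue_mile = cost_per_day / revenue_miles_per_veh_day

print(f"fleet daily cost: ${cost_per_day * fleet_size:,.0f}")
print(f"cost per revenue mile: ${cost_per_revenue_mile:.2f}")
```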
Urban underground infrastructure mapping and assessment
NASA Astrophysics Data System (ADS)
Huston, Dryver; Xia, Tian; Zhang, Yu; Fan, Taian; Orfeo, Dan; Razinger, Jonathan
2017-04-01
This paper outlines and discusses a few associated details of a smart cities approach to the mapping and condition assessment of urban underground infrastructure. Underground utilities are critical infrastructure for all modern cities. They carry drinking water, storm water, sewage, natural gas, electric power, telecommunications, steam, etc. In most cities, the underground infrastructure reflects the growth and history of the city. Many components are aging, in unknown locations with congested configurations, and in unknown condition. The technique uses sensing and information technology to determine the state of infrastructure and provide it in an appropriate, timely and secure format for managers, planners and users. The sensors include ground penetrating radar and buried sensors for persistent sensing of localized conditions. Signal processing and pattern recognition techniques convert the data into information-laden databases for use in analytics, graphical presentations, metering and planning. The presented data are from construction of the St. Paul St. CCTA Bus Station Project in Burlington, VT; utility replacement sites in Winooski, VT; and laboratory tests of smart phone position registration and magnetic signaling. The soil conditions encountered are favorable for GPR sensing and make it possible to locate buried pipes and soil layers. The present state of the art is that the data collection and processing procedures are manual and somewhat tedious, but solutions for automating these procedures appear to be viable. Magnetic signaling with moving permanent magnets has the potential for sending low-frequency telemetry signals through soils that are largely impenetrable by other electromagnetic waves.
Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy
Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James
2012-01-01
Cloud computing has revolutionized availability and access to computing and storage resources; making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to setup the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313
A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models
2013-10-01
which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high throughput commodity storage... Environment is configured as self-service Enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of... technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of
Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.
Analysis of Malicious Traffic in Modbus/TCP Communications
NASA Astrophysics Data System (ADS)
Kobayashi, Tiago H.; Batista, Aguinaldo B.; Medeiros, João Paulo S.; Filho, José Macedo F.; Brito, Agostinho M.; Pires, Paulo S. Motta
This paper presents the results of our analysis of the influence of Information Technology (IT) malicious traffic on an IP-based automation environment. We utilized a traffic generator, called MACE (Malicious trAffic Composition Environment), to inject malicious traffic into a Modbus/TCP communication system and a sniffer to capture and analyze network traffic. The tests performed show that malicious traffic represents a serious risk to critical information infrastructures. We show that this kind of traffic can increase the latency of Modbus/TCP communication and, in some cases, can put Modbus/TCP devices out of communication.
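The latency effect reported above can be illustrated with a hedged sketch of the measurement side: time a Modbus/TCP "read holding registers" (function 0x03) round trip over a plain TCP socket. The MBAP framing follows the public Modbus/TCP specification; the host, port, unit ID and register range are placeholders for a lab device, and this is not the instrumentation used by the authors.

```python
# Hedged sketch: measure Modbus/TCP request/response latency so that added
# malicious background traffic shows up as increased round-trip time.
# Host/port/unit/register values are placeholders for a lab PLC or simulator.
import socket
import struct
import time

HOST, PORT = "192.168.0.10", 502   # placeholder device address
UNIT_ID, START_REG, COUNT = 1, 0, 10

def read_holding_registers_latency(sock, transaction_id):
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU = 6 bytes).
    pdu = struct.pack(">BHH", 0x03, START_REG, COUNT)
    adu = struct.pack(">HHHB", transaction_id, 0, 1 + len(pdu), UNIT_ID) + pdu
    t0 = time.perf_counter()
    sock.sendall(adu)
    reply = sock.recv(260)          # a Modbus/TCP ADU is always shorter than 260 bytes
    elapsed = time.perf_counter() - t0
    if len(reply) < 9 or reply[7] != 0x03:
        raise RuntimeError("unexpected or exception response")
    return elapsed

with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
    samples = [read_holding_registers_latency(sock, tid) for tid in range(1, 101)]
    print(f"mean latency: {1e3 * sum(samples) / len(samples):.2f} ms, "
          f"max: {1e3 * max(samples):.2f} ms")
```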
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX, which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Integrating Emerging Data Sources Into Operational Practice
DOT National Transportation Integrated Search
2018-05-15
Agencies have the potential to collect, use, and share data from connected and automated vehicles (CAV), connected travelers, and connected infrastructure elements to improve the performance of their traffic management systems and traffic management ...
Vehicle automation and weather : challenges and opportunities.
DOT National Transportation Integrated Search
2016-12-25
Adverse weather has major impacts on the safety and operations of all roads, from signalized arterials to Interstate highways. Weather affects driver behavior, vehicle performance, pavement friction, and roadway infrastructure, thereby increasing the...
Biswas, Amitava; Liu, Chen; Monga, Inder; ...
2016-01-01
For the last few years, there has been tremendous growth in data traffic due to the high adoption rate of mobile devices and cloud computing. The Internet of Things (IoT) will stimulate even further growth. This is increasing the scale and complexity of telecom/internet service provider (SP) and enterprise data centre (DC) compute and network infrastructures. As a result, managing these large network-compute converged infrastructures is becoming complex and cumbersome. To cope, network and DC operators are trying to automate network and system operations, administration and management (OAM) functions. OAM includes all non-functional mechanisms that keep the network running.
Automated transit infrastructure maintenance demonstration.
DOT National Transportation Integrated Search
2009-04-01
The report was prepared by Bentley Systems, Inc. (Bentley) in the course of performing work contracted : for and sponsored by the New York State Energy Research and Development Authority (NYSERDA), the : New York State Department of Transportation (N...
Automated frame selection process for high-resolution microendoscopy
NASA Astrophysics Data System (ADS)
Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2015-04-01
We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
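The published algorithm's exact selection criteria are not given in the abstract, so the following is a hedged sketch of the general idea only: score every frame in a short sequence by how much it differs from its neighbours (a simple motion proxy) and keep the frame with the smallest motion score. The scoring scheme and toy data are assumptions.

```python
# Hedged sketch of automated frame selection: pick the frame with the least
# inter-frame motion, using mean absolute frame differences as a motion proxy.
import numpy as np

def select_representative_frame(frames):
    """frames: array of shape (n_frames, height, width), grayscale."""
    frames = np.asarray(frames, dtype=np.float32)
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))  # motion between consecutive frames
    # Motion score of frame i = mean of the differences to its neighbours.
    scores = np.empty(len(frames))
    scores[0], scores[-1] = diffs[0], diffs[-1]
    scores[1:-1] = 0.5 * (diffs[:-1] + diffs[1:])
    return int(np.argmin(scores))

# Toy sequence: 30 noisy frames, with exaggerated frame-to-frame change early on.
rng = np.random.default_rng(3)
video = rng.random((30, 64, 64))
video[:10] += rng.random((10, 64, 64))  # simulate heavy motion at the start
print("selected frame index:", select_representative_frame(video))
```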
Simulation to Support Local Search in Trajectory Optimization Planning
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Venable, K. Brent; Lindsey, James
2012-01-01
NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and civil tilt rotors. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. One way to address the rotorcraft noise problem is to exploit powerful search techniques from artificial intelligence to design low-noise flight profiles, which can then be evaluated in simulation or through field tests. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints that address passenger safety and comfort directly into the problem formulation.
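As a hedged illustration of constraint-aware local search, the sketch below hill-climbs over a few trajectory parameters, scoring each candidate with a toy surrogate noise model and rejecting moves that violate a comfort constraint. The actual NASA work scores candidates with physics-based rotorcraft noise simulation; the cost function, parameters, and limits here are illustrative assumptions only.

```python
# Hedged sketch of local search over approach-trajectory parameters with a
# surrogate noise cost and a hard passenger-comfort constraint. The real work
# evaluates candidates with predictive physical noise models, not this toy cost.
import random

MAX_DESCENT_RATE_FPM = 800.0   # comfort/safety constraint (assumed)

def noise_cost(params):
    """Toy surrogate noise metric: slower, gentler, higher approaches score better."""
    speed_kts, descent_fpm, altitude_ft = params
    return 0.04 * speed_kts + 0.01 * descent_fpm + 2000.0 / max(altitude_ft, 100.0)

def feasible(params):
    speed_kts, descent_fpm, altitude_ft = params
    return (40.0 <= speed_kts <= 120.0
            and 0.0 <= descent_fpm <= MAX_DESCENT_RATE_FPM
            and 500.0 <= altitude_ft <= 3000.0)

def local_search(start, iterations=2000, step=(2.0, 20.0, 50.0), seed=4):
    rng = random.Random(seed)
    best, best_cost = start, noise_cost(start)
    for _ in range(iterations):
        candidate = tuple(v + rng.uniform(-s, s) for v, s in zip(best, step))
        if feasible(candidate) and noise_cost(candidate) < best_cost:
            best, best_cost = candidate, noise_cost(candidate)
    return best, best_cost

best, cost = local_search(start=(100.0, 700.0, 1500.0))
print("best (speed kts, descent fpm, altitude ft):",
      tuple(round(v, 1) for v in best), "cost:", round(cost, 3))
```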
NASA Astrophysics Data System (ADS)
Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele
2017-09-01
The on-going H2020 project INFRALERT aims to increase rail and road infrastructure capacity within the current framework of increased transportation demand by developing and deploying solutions to optimise maintenance intervention planning. It includes two real-world pilots, one for road and one for railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach comprising several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis and iv) decision support. Results from applying these toolkits to a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.
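The eIMS toolkits themselves are not detailed in the abstract, so the following is only a hedged, minimal illustration of the nowcasting/alert-generation pattern: fit a simple degradation trend to an asset condition index and raise an alert when the forecast crosses an intervention threshold. The data, threshold and linear model are assumptions, not INFRALERT's methods.

```python
# Hedged sketch of condition forecasting and alert generation: fit a linear
# degradation trend and flag the year the forecast crosses a threshold.
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(0, 8)                                            # inspection campaigns
condition = 95.0 - 4.5 * years + rng.normal(0, 1.5, years.size)    # condition index, 100 = as new
THRESHOLD = 55.0                                                   # intervention trigger (assumed)

slope, intercept = np.polyfit(years, condition, deg=1)

def forecast(year):
    return slope * year + intercept

horizon = np.arange(years[-1] + 1, years[-1] + 11)
breach_years = horizon[forecast(horizon) < THRESHOLD]

if breach_years.size:
    print(f"ALERT: condition forecast to drop below {THRESHOLD} in year {int(breach_years[0])}; "
          f"schedule maintenance before then.")
else:
    print("No intervention needed within the 10-year horizon.")
```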
Data quality can make or break a research infrastructure
NASA Astrophysics Data System (ADS)
Pastorello, G.; Gunter, D.; Chu, H.; Christianson, D. S.; Trotta, C.; Canfora, E.; Faybishenko, B.; Cheah, Y. W.; Beekwilder, N.; Chan, S.; Dengel, S.; Keenan, T. F.; O'Brien, F.; Elbashandy, A.; Poindexter, C.; Humphrey, M.; Papale, D.; Agarwal, D.
2017-12-01
Research infrastructures (RIs) commonly support observational data provided by multiple, independent sources. Uniformity in the data distributed by such RIs is important in most applications, e.g., in comparative studies using data from two or more sources. Achieving uniformity in terms of data quality is challenging, especially considering that many data issues are unpredictable and cannot be detected until a first occurrence of the issue. As a result, many data quality control activities within RIs require a manual, human-in-the-loop element, making them expensive. Our motivating example is the FLUXNET2015 dataset: a collection of ecosystem-level carbon, water, and energy fluxes between land and atmosphere from over 200 sites around the world, some sites with over 20 years of data. About 90% of the human effort to create the dataset was spent in data-quality-related activities. Based on this experience, we have been working on solutions to increase the automation of data quality control procedures. Since it is nearly impossible to fully automate all quality-related checks, we have been drawing on experience with techniques used in software development, which shares a few common constraints. In both managing scientific data and writing software, human time is a precious resource; code bases, like science datasets, can be large, complex, and full of errors; and both scientific and software endeavors can be pursued by individuals, but collaborative teams can accomplish a lot more. The lucrative and fast-paced nature of the software industry fueled the creation of methods and tools to increase automation and productivity within these constraints. Issue tracking systems, methods for translating problems into automated tests, and powerful version control tools are a few examples. Terrestrial and aquatic ecosystems research relies heavily on many types of observational data. As the volume of data collected increases, ensuring data quality is becoming an unwieldy challenge for RIs. Business-as-usual approaches to data quality do not work with larger data volumes. We believe RIs can benefit greatly from adapting and imitating this body of theory and practice from software quality into data quality, enabling systematic and reproducible safeguards against errors and mistakes in datasets as much as in software.
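One way to read "translating problems into automated tests" for data is sketched below: express a few quality-control rules on a flux time series as plain assert-style test functions that a runner such as pytest could execute on every new data submission. The variable names, limits and synthetic data are illustrative assumptions; the FLUXNET QC pipeline applies far more extensive, site-specific checks.

```python
# Hedged sketch of "data quality rules as automated tests" for a flux record.
import numpy as np

rng = np.random.default_rng(6)
# Synthetic half-hourly CO2 flux record (umol m-2 s-1) with injected problems.
fluxes = rng.normal(-2.0, 3.0, 48 * 30)
fluxes[100] = 250.0          # physically implausible spike
fluxes[200:210] = np.nan     # data gap

def test_values_within_physical_range():
    valid = fluxes[~np.isnan(fluxes)]
    assert np.all((valid > -50.0) & (valid < 50.0)), "flux outside plausible range"

def test_gap_fraction_acceptable():
    assert np.isnan(fluxes).mean() < 0.05, "too many missing records"

def test_no_step_changes():
    jumps = np.abs(np.diff(fluxes[~np.isnan(fluxes)]))
    assert jumps.max() < 100.0, "suspicious step change between records"

if __name__ == "__main__":
    for test in (test_values_within_physical_range, test_gap_fraction_acceptable, test_no_step_changes):
        try:
            test()
            print(f"{test.__name__}: PASS")
        except AssertionError as err:
            print(f"{test.__name__}: FAIL ({err})")
```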
Robot-Powered Reliability Testing at NREL's ESIF
Harrison, Kevin
2018-02-14
With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested (and currently costly) component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle, all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.
An automated system for rail transit infrastructure inspection.
DOT National Transportation Integrated Search
2015-03-01
This project applied commercial remote sensing and spatial information (CRS&SI) : technologies such as Ground Penetrating Radar (GPR), laser, GIS, and GPS to passenger rail : inspections. An integrated rail inspection system that can be mounted on hi...
University of Florida Advanced Technologies Campus Testbed [Summary
DOT National Transportation Integrated Search
2017-12-01
Connected vehicles (CV) and automated vehicles (AV) are the subjects of numerous projects around the world. CVs can communicate with a driver, other vehicles, roadside infrastructure, the Internet, or all of the above. These communications can assist...
Lunar Contour Crafting: A Novel Technique for ISRU-Based Habitat Development
NASA Technical Reports Server (NTRS)
Khoshnevis, Behrokh; Bodiford, Melanie P.; Burks, Kevin H.; Ethridge, Ed; Tucker, Dennis; Kim, Won; Toutanji, Houssam; Fiske, Michael R.
2004-01-01
As the nation prepares to return to the Moon, it is apparent that the viability of long-duration visits with appropriate radiation shielding/crew protection hinges on the development of Lunar structures, preferably in advance of a manned landing, and preferably utilizing in-situ resources. Contour Crafting is a USC-patented technique for the automated construction of terrestrial concrete-based structures. The process is relatively fast, completely automated, and supports the incorporation of various infrastructure elements such as plumbing and electrical wiring. This paper will present a conceptual design of a Lunar Contour Crafting system designed to autonomously fabricate integrated structures on the Lunar surface using high-strength concrete based on Lunar regolith, including glass reinforcement rods or fibers fabricated from melted regolith. Design concepts will be presented, as well as results of initial tests aimed at concrete and glass production using Lunar regolith simulant. Key issues and concerns will be presented, along with design concepts for an LCC testbed to be developed at MSFC's Prototype Development Laboratory (PDL).
An expert system for simulating electric loads aboard Space Station Freedom
NASA Technical Reports Server (NTRS)
Kukich, George; Dolce, James L.
1990-01-01
Space Station Freedom will provide an infrastructure for space experimentation. This environment will feature regulated access to any resources required by an experiment. Automated systems are being developed to manage the electric power so that researchers can have the flexibility to modify their experiment plan for contingencies or for new opportunities. To define these flexible power management characteristics for Space Station Freedom, a simulation is required that captures the dynamic nature of space experimentation; namely, an investigator is allowed to restructure his experiment and to modify its execution. This changes the energy demands for the investigator's range of options. An expert system competent in the domain of cryogenic fluid management experimentation was developed. It will be used to help design and test automated power scheduling software for Freedom's electric power system. The expert system allows experiment planning and experiment simulation. The former evaluates experimental alternatives and offers advice on the details of the experiment's design. The latter provides a real-time simulation of the experiment replete with appropriate resource consumption.
Tackling the x-ray cargo inspection challenge using machine learning
NASA Astrophysics Data System (ADS)
Jaccard, Nicolas; Rogers, Thomas W.; Morton, Edward J.; Griffin, Lewis D.
2016-05-01
The current infrastructure for non-intrusive inspection of cargo containers cannot accommodate exploding commerce volumes and increasingly stringent regulations. There is a pressing need to develop methods to automate parts of the inspection workflow, enabling expert operators to focus on a manageable number of high-risk images. To tackle this challenge, we developed a modular framework for automated X-ray cargo image inspection. Employing state-of-the-art machine learning approaches, including deep learning, we demonstrate high performance for empty container verification and specific threat detection. This work constitutes a significant step towards the partial automation of X-ray cargo image inspection.
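The empty-container verification task can be framed as binary image classification; the hedged sketch below shows a small Keras CNN on synthetic stand-in images. The architecture, image size and data are illustrative assumptions only; the published framework uses much larger X-ray images, tailored preprocessing and deeper models.

```python
# Hedged sketch of empty-container verification as a small binary CNN.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(7)
# Synthetic stand-ins: 64x64 "X-ray" patches; loaded containers get dense blobs.
n = 400
images = rng.random((n, 64, 64, 1)).astype("float32") * 0.2
labels = rng.integers(0, 2, n)                      # 1 = loaded, 0 = empty
for i in np.where(labels == 1)[0]:
    r, c = rng.integers(8, 40, 2)
    images[i, r:r + 16, c:c + 16, 0] += 0.6         # simulated cargo attenuation

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=3, batch_size=32, validation_split=0.2, verbose=2)
```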
Schneidereit, Dominik; Kraus, Larissa; Meier, Jochen C; Friedrich, Oliver; Gilbert, Daniel F
2017-06-15
High-content screening microscopy relies on automation infrastructure that is typically proprietary, non-customizable, costly and requires a high level of skill to use and maintain. The increasing availability of rapid prototyping technology makes it possible to quickly engineer alternatives to conventional automation infrastructure that are low-cost and user-friendly. Here, we describe a 3D-printed, inexpensive, open-source and scalable motorized positioning stage for automated high-content screening microscopy and provide detailed step-by-step instructions for re-building the device, including a comprehensive parts list, 3D design files in STEP (Standard for the Exchange of Product model data) and STL (Standard Tessellation Language) format, electronic circuits and wiring diagrams as well as software code. System assembly including 3D printing requires approx. 30 h. The fully assembled device is light-weight (1.1 kg), small (33×20×8 cm) and extremely low-cost (approx. EUR 250). We describe positioning characteristics of the stage, including spatial resolution, accuracy and repeatability, compare imaging data generated with our device to data obtained using a commercially available microplate reader, demonstrate its suitability to high-content microscopy in 96-well high-throughput screening format and validate its applicability to automated functional Cl⁻- and Ca²⁺-imaging with recombinant HEK293 cells as a model system. A time-lapse video of the stage during operation and as part of a custom assembled screening robot can be found at https://vimeo.com/158813199. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
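As a hedged illustration of how such an open-source stage might be scripted for a plate scan, the sketch below sends G-code-style moves over a serial link, assuming a GRBL-like firmware on the stage controller. The published device's own command protocol, serial port name and well spacing are not reproduced here; all of those are assumptions for illustration.

```python
# Hedged sketch: raster a 96-well plate on a motorized XY stage over serial,
# assuming GRBL-style G-code firmware. Port, baud rate and pitch are placeholders.
import time
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200   # placeholder serial settings
WELL_PITCH_MM = 9.0                   # standard 96-well plate spacing (assumed)

def send(ser, line):
    ser.write((line + "\n").encode("ascii"))
    return ser.readline().decode("ascii", errors="replace").strip()  # e.g. "ok"

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    time.sleep(2)                     # let the controller reset after opening the port
    send(ser, "G21")                  # millimetre units
    send(ser, "G90")                  # absolute positioning
    for row in range(8):              # rows A-H
        for col in range(12):         # columns 1-12
            x, y = col * WELL_PITCH_MM, row * WELL_PITCH_MM
            send(ser, f"G0 X{x:.2f} Y{y:.2f}")
            time.sleep(0.5)           # settle; the microscope would acquire an image here
```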
NASA Astrophysics Data System (ADS)
Francisco, Glen; Brown, Todd
2012-06-01
Integrated security systems are essential to pre-empting criminal assaults. Nearly 500,000 sites have been identified (source: US DHS) as critical infrastructure sites that would suffer severe damage if a security breach should occur. One major breach in any of 123 U.S. facilities, identified as "most critical", threatens more than 1,000,000 people. The vulnerabilities of critical infrastructure are expected to continue and even heighten over the coming years.
The ORAC-DR data reduction pipeline
NASA Astrophysics Data System (ADS)
Cavanagh, B.; Jenness, T.; Economou, F.; Currie, M. J.
2008-03-01
The ORAC-DR data reduction pipeline has been used by the Joint Astronomy Centre since 1998. Originally developed for an infrared spectrometer and a submillimetre bolometer array, it has since expanded to support twenty instruments from nine different telescopes. By using shared code and a common infrastructure, rapid development of an automated data reduction pipeline for nearly any astronomical data is possible. This paper discusses the infrastructure available to developers and estimates the development timescales expected to reduce data for new instruments using ORAC-DR.
Finak, Greg; Frelinger, Jacob; Jiang, Wenxin; Newell, Evan W.; Ramey, John; Davis, Mark M.; Kalams, Spyros A.; De Rosa, Stephen C.; Gottardo, Raphael
2014-01-01
Flow cytometry is used increasingly in clinical research for cancer, immunology and vaccines. Technological advances in cytometry instrumentation are increasing the size and dimensionality of data sets, posing a challenge for traditional data management and analysis. Automated analysis methods, despite a general consensus of their importance to the future of the field, have been slow to gain widespread adoption. Here we present OpenCyto, a new BioConductor infrastructure and data analysis framework designed to lower the barrier of entry to automated flow data analysis algorithms by addressing key areas that we believe have held back wider adoption of automated approaches. OpenCyto supports end-to-end data analysis that is robust and reproducible while generating results that are easy to interpret. We have improved the existing, widely used core BioConductor flow cytometry infrastructure by allowing analysis to scale in a memory efficient manner to the large flow data sets that arise in clinical trials, and integrating domain-specific knowledge as part of the pipeline through the hierarchical relationships among cell populations. Pipelines are defined through a text-based csv file, limiting the need to write data-specific code, and are data agnostic to simplify repetitive analysis for core facilities. We demonstrate how to analyze two large cytometry data sets: an intracellular cytokine staining (ICS) data set from a published HIV vaccine trial focused on detecting rare, antigen-specific T-cell populations, where we identify a new subset of CD8 T-cells with a vaccine-regimen specific response that could not be identified through manual analysis, and a CyTOF T-cell phenotyping data set where a large staining panel and many cell populations are a challenge for traditional analysis. The substantial improvements to the core BioConductor flow cytometry packages give OpenCyto the potential for wide adoption. It can rapidly leverage new developments in computational cytometry and facilitate reproducible analysis in a unified environment. PMID:25167361
Using AberOWL for fast and scalable reasoning over BioPortal ontologies.
Slater, Luke; Gkoutos, Georgios V; Schofield, Paul N; Hoehndorf, Robert
2016-08-08
Reasoning over biomedical ontologies using their OWL semantics has traditionally been a challenging task due to the high theoretical complexity of OWL-based automated reasoning. As a consequence, ontology repositories, as well as most other tools utilizing ontologies, either provide access to ontologies without use of automated reasoning, or limit the number of ontologies for which automated reasoning-based access is provided. We apply the AberOWL infrastructure to provide automated reasoning-based access to all accessible and consistent ontologies in BioPortal (368 ontologies). We perform an extensive performance evaluation to determine query times, both for queries of different complexity and for queries that are performed in parallel over the ontologies. We demonstrate that, with the exception of a few ontologies, even complex and parallel queries can now be answered in milliseconds, therefore allowing automated reasoning to be used on a large scale, to run in parallel, and with rapid response times.
Building an intellectual infrastructure for space commerce
NASA Technical Reports Server (NTRS)
Stone, Barbara A.; Struthers, Jeffrey L.
1992-01-01
Competition in commerce requires an 'intellectual infrastructure', that is, a work force with extensive scientific and technical knowledge and a thorough understanding of the business world. This paper focuses on the development of such intellectual infrastructure for space commerce. Special consideration is given to the contributions to this development by the 17 Centers for the Commercial Development of Space Program conducting commercially oriented research in eight specialized areas: automation and robotics, remote sensing, life sciences, materials processing in space, space power, space propulsion, space structures and materials, and advanced satellite communications. Attention is also given to the Space Business Development Center concept aimed at addressing a variety of barriers common to the development of space commerce.
Automated Mapping of Flood Events in the Mississippi River Basin Utilizing NASA Earth Observations
NASA Technical Reports Server (NTRS)
Bartkovich, Mercedes; Baldwin-Zook, Helen Blue; Cruz, Dashiell; McVey, Nicholas; Ploetz, Chris; Callaway, Olivia
2017-01-01
The Mississippi River Basin is the fourth-largest drainage basin in the world and is susceptible to multi-level flood events caused by heavy precipitation, snow melt, and changes in water table levels. Conducting flood analysis during periods of disaster is a challenging endeavor for NASA's Short-term Prediction Research and Transition Center (SPoRT), the Federal Emergency Management Agency (FEMA), and the U.S. Geological Survey's Hazards Data Distribution System (USGS HDDS) due to the labor-intensive analysis required and a lack of manpower. During this project, an automated script was generated that performs high-level flood analysis to relieve the workload for end users. The script incorporated Landsat 8 Operational Land Imager (OLI) tiles and utilized machine-learning techniques to generate accurate water extent maps. The script referenced the Moderate Resolution Imaging Spectroradiometer (MODIS) land-water mask to isolate areas of flood-induced water. These areas were overlaid onto the National Land Cover Database's (NLCD) land cover data, the Oak Ridge National Laboratory's LandScan data, and Homeland Infrastructure Foundation-Level Data (HIFLD) to determine the classification of areas impacted and the population density affected by flooding. The automated algorithm was initially tested on the September 2016 flood event that occurred in the Upper Mississippi River Basin, and was then further tested on multiple flood events within the Mississippi River Basin. This script allows end users to create their own flood probability and impact maps for disaster mitigation and recovery efforts.
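One step such a flood-mapping script might take is sketched below: derive a water mask from Landsat 8 OLI green (band 3) and near-infrared (band 5) reflectance using the NDWI index, then keep only pixels outside a normal-water reference mask as candidate flood water. The file names, NDWI threshold and use of a simple boolean reference mask are illustrative assumptions, not the project's actual workflow.

```python
# Hedged sketch: NDWI water mask from Landsat 8 OLI bands, filtered against a
# normal-water reference mask to highlight candidate flood pixels.
import numpy as np
import rasterio

with rasterio.open("LC08_B3_green.tif") as green_src, \
     rasterio.open("LC08_B5_nir.tif") as nir_src:
    green = green_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")
    profile = green_src.profile

ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
water = ndwi > 0.1                       # assumed threshold for open water

# Reference land-water mask (True where water is normally present, e.g. from MODIS).
with rasterio.open("reference_water_mask.tif") as ref_src:
    normal_water = ref_src.read(1).astype(bool)

flood = water & ~normal_water            # water where there normally is none
print(f"candidate flood pixels: {int(flood.sum())}")

profile.update(dtype="uint8", count=1)
with rasterio.open("flood_mask.tif", "w", **profile) as dst:
    dst.write(flood.astype("uint8"), 1)
```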
MSFC Three Point Docking Mechanism design review
NASA Technical Reports Server (NTRS)
Schaefer, Otto; Ambrosio, Anthony
1992-01-01
In the next few decades, we will be launching expensive satellites and space platforms that will require recovery for economic reasons: because of initial malfunction, for servicing or repairs, or out of a concern for post-lifetime debris removal. The planned availability of a Three Point Docking Mechanism (TPDM) is a positive step towards an operational satellite retrieval infrastructure. This study effort supports NASA/MSFC engineering work in developing an automated docking capability. The work was performed by the Grumman Space & Electronics Group as a concept evaluation/test for the Tumbling Satellite Retrieval Kit. Simulation of a TPDM capture was performed in Grumman's Large Amplitude Space Simulator (LASS) using mockups of both parts (the mechanism and payload). Similar TPDM simulation activities and more extensive hardware testing were performed at NASA/MSFC in the Flight Robotics Laboratory and the Space Station/Space Operations Mechanism Test Bed (6-DOF Facility).
NASA Leads Demo for Drone Traffic Management Tech
2017-06-30
During the latest NASA-led demonstrations of technologies that could be part of an automated traffic management system for drones, pilots sent their vehicles beyond visual line-of-sight in simulated infrastructure inspections, search and rescue support, and package delivery.
DOT National Transportation Integrated Search
2017-11-23
The Federal Highway Administration (FHWA) has adapted the Transportation Systems Management and Operations (TSMO) Capability Maturity Model (CMM) to describe the operational maturity of Infrastructure Owner-Operator (IOO) agencies across a range of i...
Collecting Network-wide Bicycle and Pedestrian Data: A Guidebook for When and Where to Count
DOT National Transportation Integrated Search
2017-09-01
Across the United States, jurisdictions are investing more in bicycle and pedestrian infrastructure, which requires non-motorized traffic volume data. While some agencies use automated counters to collect continuous and short duration counts, the mos...
Integrating Automation into a Multi-Mission Operations Center
NASA Technical Reports Server (NTRS)
Surka, Derek M.; Jones, Lori; Crouse, Patrick; Cary, Everett A, Jr.; Esposito, Timothy C.
2007-01-01
NASA Goddard Space Flight Center's Space Science Mission Operations (SSMO) Project is currently tackling the challenge of minimizing ground operations costs for multiple satellites that have surpassed their prime mission phase and are well into extended mission. These missions are being reengineered into a multi-mission operations center built around modern information technologies and a common ground system infrastructure. The effort began with the integration of four SMEX missions into a similar architecture that provides command and control capabilities and demonstrates fleet automation and control concepts as a pathfinder for additional mission integrations. The reengineered ground system, called the Multi-Mission Operations Center (MMOC), is now undergoing a transformation to support other SSMO missions, which include SOHO, Wind, and ACE. This paper presents the automation principles and lessons learned to date for integrating automation into an existing operations environment for multiple satellites.
Improving Grid Resilience through Informed Decision-making (IGRID)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, Laurie; Stamber, Kevin L.; Jeffers, Robert Fredric
The transformation of the distribution grid from a centralized to decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While changes are largely beneficial, the interface between grid operator and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that for the foreseeable future will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation, patterns that can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.
Robot-Powered Reliability Testing at NREL's ESIF
Harrison, Kevin
2018-02-14
With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested, and currently costly, component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle, all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.
NASA Technical Reports Server (NTRS)
Calvert, John; Freas, George, II
2017-01-01
The RAPTR was developed to test ISS payloads for NASA. RAPTR is a simulation of the Command and Data Handling (C&DH) interfaces of the ISS (MIL-STD 1553B, Ethernet and TAXI) and is designed to facilitate rapid testing and deployment of payload experiments to the ISS. The ISS Program's goal is to reduce the amount of time it takes a payload developer to build, test and fly a payload, including payload software. The RAPTR meets this need with its user oriented, visually rich interface. Additionally, the Analog and Discrete (A&D) signals of the following payload types may be tested with RAPTR: (1) EXPRESS Sub Rack Payloads; (2) ELC payloads; (3) External Columbus payloads; (4) External Japanese Experiment Module (JEM) payloads. The automated payload configuration setup and payload data inspection infrastructure is found nowhere else in ISS payload test systems. Testing can be done with minimal human intervention and setup, as the RAPTR automatically monitors parameters in the data headers that are sent to, and come from the experiment under test.
A framework to support human factors of automation in railway intelligent infrastructure.
Dadashi, Nastaran; Wilson, John R; Golightly, David; Sharples, Sarah
2014-01-01
Technological and organisational advances have increased the potential for remote access and proactive monitoring of the infrastructure in various domains and sectors - water and sewage, oil and gas and transport. Intelligent Infrastructure (II) is an architecture that potentially enables the generation of timely and relevant information about the state of any type of infrastructure asset, providing a basis for reliable decision-making. This paper reports an exploratory study to understand the concepts and human factors associated with II in the railway, largely drawing from structured interviews with key industry decision-makers and attachment to pilot projects. Outputs from the study include a data-processing framework defining the key human factors at different levels of the data structure within a railway II system and a system-level representation. The framework and other study findings will form a basis for human factors contributions to systems design elements such as information interfaces and role specifications.
Hoffman, P.; Kline, E.; George, L.; Price, K.; Clark, M.; Walasin, R.
1995-01-01
The Military Health Service System (MHSS) provides health care for the Department of Defense (DOD). This system operates on an annual budget of $15 Billion, supports 127 medical treatment facilities (MTFs) and 500 clinics, and provides support to 8.7 million beneficiaries worldwide. To support these facilities and their patients, the MHSS uses more than 125 different networked automated medical systems. These systems rely on a heterogeneous telecommunications infrastructure for data communications. With the support of the Defense Medical Information Management (DMIM) Program Office, our goal was to identify the network requirements for DMIM migration and target systems and design a communications infrastructure to support all systems with an integrated network. This work used tools from Business Process Reengineering (BPR) and applied it to communications infrastructure design for the first time. The methodology and results are applicable to any health care enterprise, military or civilian. PMID:8563346
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
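As a rough illustration of the kind of parameter estimation described (not the paper's algorithm), the sketch below fits a secondary-circuit resistance and reactance to AMI-style interval measurements by ordinary least squares; the simplified voltage-drop model and variable names are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's algorithm): estimate a secondary
# service-line resistance and reactance from AMI-style measurement pairs by
# ordinary least squares. The measurement names and the simple linearized model
# (|V_src| - |V_meter| ~= R*P/|V| + X*Q/|V|) are assumptions for illustration.
import numpy as np

def estimate_rx(v_src, v_meter, p_kw, q_kvar):
    """Fit R, X [ohm] from per-interval voltage-drop, P, and Q measurements."""
    v_src, v_meter = np.asarray(v_src), np.asarray(v_meter)
    p = np.asarray(p_kw) * 1e3        # W
    q = np.asarray(q_kvar) * 1e3      # var
    drop = v_src - v_meter            # measured voltage drop per interval
    A = np.column_stack([p / v_meter, q / v_meter])
    (r, x), *_ = np.linalg.lstsq(A, drop, rcond=None)
    return r, x

# Synthetic example: true R = 0.05 ohm, X = 0.02 ohm
rng = np.random.default_rng(1)
p = rng.uniform(1, 5, 96); q = rng.uniform(0.2, 1.5, 96)
v_meter = rng.uniform(235, 245, 96)
v_src = v_meter + (0.05 * p * 1e3 + 0.02 * q * 1e3) / v_meter + rng.normal(0, 0.02, 96)
print(estimate_rx(v_src, v_meter, p, q))
```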
Implications of Responsive Space on the Flight Software Architecture
NASA Technical Reports Server (NTRS)
Wilmot, Jonathan
2006-01-01
The Responsive Space initiative has several implications for flight software that need to be addressed not only within the run-time element, but the development infrastructure and software life-cycle process elements as well. The runtime element must at a minimum support Plug & Play, while the development and process elements need to incorporate methods to quickly generate the needed documentation, code, tests, and all of the artifacts required of flight quality software. Very rapid response times go even further, and imply little or no new software development, requiring instead the use of only predeveloped and certified software modules that can be integrated and tested through automated methods. These elements have typically been addressed individually with significant benefits, but it is when they are combined that they can have the greatest impact to Responsive Space. The Flight Software Branch at NASA's Goddard Space Flight Center has been developing the runtime, infrastructure and process elements needed for rapid integration with the Core Flight software System (CFS) architecture. The CFS architecture consists of three main components: the core Flight Executive (cFE), the component catalog, and the Integrated Development Environment (IDE). This paper will discuss the design of the components, how they facilitate rapid integration, and lessons learned as the architecture is utilized for an upcoming spacecraft.
DOT National Transportation Integrated Search
2014-01-01
This issue of Research Showcase highlights the value of roadside vegetation, from stabilizing soil, which protects infrastructure and provides safe clear zones for errant vehicles, to providing habitat for wildlife and crop pollinators. Recent FD...
50 CFR 86.115 - How should I administer the survey?
Code of Federal Regulations, 2010 CFR
2010-10-01
... (CONTINUED) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM BOATING INFRASTRUCTURE GRANT (BIG... to collect data, which may include telephone, mail, fax, or other inventory means. We do not expect you to use automated, electronic, mechanical, or similar means of information collection. (d) Data...
50 CFR 86.115 - How should I administer the survey?
Code of Federal Regulations, 2011 CFR
2011-10-01
... (CONTINUED) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM BOATING INFRASTRUCTURE GRANT (BIG... to collect data, which may include telephone, mail, fax, or other inventory means. We do not expect you to use automated, electronic, mechanical, or similar means of information collection. (d) Data...
50 CFR 86.115 - How should I administer the survey?
Code of Federal Regulations, 2012 CFR
2012-10-01
... (CONTINUED) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM BOATING INFRASTRUCTURE GRANT (BIG... to collect data, which may include telephone, mail, fax, or other inventory means. We do not expect you to use automated, electronic, mechanical, or similar means of information collection. (d) Data...
BCube: Building a Geoscience Brokering Framework
NASA Astrophysics Data System (ADS)
Jodha Khalsa, Siri; Nativi, Stefano; Duerr, Ruth; Pearlman, Jay
2014-05-01
BCube is addressing the need for effective and efficient multi-disciplinary collaboration and interoperability through the advancement of brokering technologies. As a prototype "building block" for NSF's EarthCube cyberinfrastructure initiative, BCube is demonstrating how a broker can serve as an intermediary between information systems that implement well-defined interfaces, thereby providing a bridge between communities that employ different specifications. Building on the GEOSS Discover and Access Broker (DAB), BCube will develop new modules and services including:
• Expanded semantic brokering capabilities
• Business Model support for work flows
• Automated metadata generation
• Automated linking to services discovered via web crawling
• Credential passing for seamless access to data
• Ranking of search results from brokered catalogs
Because facilitating cross-discipline research involves cultural as well as technical challenges, BCube is also addressing the sociological and educational components of infrastructure development. We are working, initially, with four geoscience disciplines: hydrology, oceans, polar and weather, with an emphasis on connecting existing domain infrastructure elements to facilitate cross-domain communications.
NASA Technical Reports Server (NTRS)
Rilee, Michael Lee; Kuo, Kwo-Sen
2017-01-01
The SpatioTemporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions into integer operations, e.g. conditional sub-setting, taking into account representative spatiotemporal resolutions of the data in the datasets. STARE geo-spatiotemporally aligns data placements of diverse data on massively parallel resources to maximize performance. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their efforts and expertise on data processing. With STARE-enabled automation, SciDB (Scientific Database) plus STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of interoperable data, and easing result sharing. Using SciDB plus STARE as part of an integrated analysis infrastructure dramatically eases combining diametrically different datasets.
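To illustrate the general idea of turning spatial containment tests into integer operations, the sketch below uses a generic quadtree-style bit-interleaved index; it is emphatically not the actual STARE encoding, only an analogy for how an integer prefix test can replace geometric set logic.

```python
# Generic illustration of the idea behind index schemes like STARE: encode a
# location as an integer whose bit prefix identifies its containing cell at
# every coarser level, so "is point inside region cell?" becomes an integer
# prefix test. This quadtree-style encoding is NOT the actual STARE scheme.
def quad_index(lon, lat, level):
    """Interleave quadtree cell digits into one integer."""
    x = (lon + 180.0) / 360.0
    y = (lat + 90.0) / 180.0
    idx = 0
    for _ in range(level):
        x *= 2; y *= 2
        qx, qy = int(x), int(y)
        idx = (idx << 2) | (qx << 1) | qy
        x -= qx; y -= qy
    return idx

def contains(cell_idx, cell_level, point_idx, point_level):
    """True if the point's fine cell lies inside the coarser cell (prefix match)."""
    return (point_idx >> 2 * (point_level - cell_level)) == cell_idx

region = quad_index(-90.0, 38.0, 6)       # a coarse cell
pt = quad_index(-89.7, 38.1, 12)          # a nearby point at finer resolution
print(contains(region, 6, pt, 12))        # expected True for this pair
```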
Windows Terminal Servers Orchestration
NASA Astrophysics Data System (ADS)
Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim
2017-10-01
Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and the Microsoft System Center suite enables automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration not only reduces the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes as well as catering for workload peaks.
van Soest, Johan; Sun, Chang; Mussmann, Ole; Puts, Marco; van den Berg, Bob; Malic, Alexander; van Oppen, Claudia; Towend, David; Dekker, Andre; Dumontier, Michel
2018-01-01
Conventional data mining algorithms are unable to satisfy the current requirements for analyzing big data in fields such as medicine, policy making, judicial records, and tax records. However, applying diverse datasets from different institutes (both healthcare and non-healthcare related) can enrich information and insights. To our knowledge, no approach yet exists for analyzing these data in an automated, privacy-preserving manner. In this work, we propose an infrastructure and proof of concept for privacy-preserving analytics on vertically partitioned data.
The Virtual Mission Operations Center
NASA Technical Reports Server (NTRS)
Moore, Mike; Fox, Jeffrey
1994-01-01
Spacecraft management is becoming more human intensive as spacecraft become more complex and as operations costs are growing accordingly. Several automation approaches have been proposed to lower these costs. However, most of these approaches are not flexible enough in the operations processes and levels of automation that they support. This paper presents a concept called the Virtual Mission Operations Center (VMOC) that provides highly flexible support for dynamic spacecraft management processes and automation. In a VMOC, operations personnel can be shared among missions, the operations team can change personnel and their locations, and automation can be added and removed as appropriate. The VMOC employs a form of on-demand supervisory control called management by exception to free operators from having to actively monitor their system. The VMOC extends management by exception, however, so that distributed, dynamic teams can work together. The VMOC uses work-group computing concepts and groupware tools to provide a team infrastructure, and it employs user agents to allow operators to define and control system automation.
Clarity: An Open Source Manager for Laboratory Automation
Delaney, Nigel F.; Echenique, José Rojas; Marx, Christopher J.
2013-01-01
Software to manage automated laboratories interfaces with hardware instruments, gives users a way to specify experimental protocols, and schedules activities to avoid hardware conflicts. In addition to these basics, modern laboratories need software that can run multiple different protocols in parallel and that can be easily extended to interface with a constantly growing diversity of techniques and instruments. We present Clarity: a laboratory automation manager that is hardware agnostic, portable, extensible and open source. Clarity provides critical features including remote monitoring, robust error reporting by phone or email, and full state recovery in the event of a system crash. We discuss the basic organization of Clarity; demonstrate an example of its implementation for the automated analysis of bacterial growth; and describe how the program can be extended to manage new hardware. Clarity is mature; well documented; actively developed; written in C# for the Common Language Infrastructure; and is free and open source software. These advantages set Clarity apart from currently available laboratory automation programs. PMID:23032169
Agile based "Semi-"Automated Data ingest process : ORNL DAAC example
NASA Astrophysics Data System (ADS)
Santhana Vannan, S. K.; Beaty, T.; Cook, R. B.; Devarakonda, R.; Hook, L.; Wei, Y.; Wright, D.
2015-12-01
The ORNL DAAC archives and publishes data and information relevant to biogeochemical, ecological, and environmental processes. The data archived at the ORNL DAAC must be well formatted, self-descriptive, and documented, as well as referenced in a peer-reviewed publication. The ORNL DAAC ingest team curates diverse data sets from multiple data providers simultaneously. To streamline the ingest process, the data set submission process at the ORNL DAAC has recently been updated to use an agile process, and a semi-automated workflow system has been developed to provide a consistent data provider experience and to create a uniform data product. The goals of the semi-automated agile ingest process are to: 1. Provide the ability to track a data set from acceptance to publication; 2. Automate steps that can be automated to improve efficiencies and reduce redundancy; 3. Update legacy ingest infrastructure; 4. Provide a centralized system to manage the various aspects of ingest. This talk will cover the agile methodology, workflow, and tools developed through this system.
[Research applications in digital radiology. Big data and co].
Müller, H; Hanbury, A
2016-02-01
Medical imaging produces increasingly complex images (e.g. thinner slices and higher resolution) with more protocols, so that image reading has also become much more complex. More information needs to be processed and usually the number of radiologists available for these tasks has not increased to the same extent. The objective of this article is to present current research results from projects on the use of image data for clinical decision support. An infrastructure that can allow large volumes of data to be accessed is presented. In this way the best performing tools can be identified without the medical data having to leave secure servers. The text presents the results of the VISCERAL and Khresmoi EU-funded projects, which allow the analysis of previous cases from institutional archives to support decision-making and for process automation. The results also represent a secure evaluation environment for medical image analysis. This allows the use of data extracted from past cases to solve information needs occurring when diagnosing new cases. The presented research prototypes allow direct extraction of knowledge from the visual data of the images and to use this for decision support or process automation. Real clinical use has not been tested but several subjective user tests showed the effectiveness and efficiency of the process. The future in radiology will clearly depend on better use of the important knowledge in clinical image archives to automate processes and aid decision-making via big data analysis. This can help concentrate the work of radiologists towards the most important parts of diagnostics.
Automated Design of Noise-Minimal, Safe Rotorcraft Trajectories
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Venable, K. Brent; Lindsay, James
2012-01-01
NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and aircraft such as 40-passenger civil tiltrotors. Rotorcraft have a number of advantages over fixed-wing aircraft, primarily in not requiring direct access to the primary fixed-wing runways. As such they can operate at an airport without directly interfering with major air carrier and commuter aircraft operations. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. In this paper we propose to address the rotorcraft noise problem by exploiting powerful search techniques coming from artificial intelligence, coupled with simulation and field tests, to design trajectories that are expected to reduce the amount of ground noise generated. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints into the problem formulation that address passenger safety and comfort.
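A toy sketch of the local-search idea follows: perturb a parameterized approach profile, score it with a stand-in noise cost, and keep only safe, lower-noise changes. The cost function, safety constraint, and waypoint altitudes are placeholders, not the paper's rotorcraft noise model.

```python
# Hedged sketch of the local-search idea: perturb a parameterized approach
# trajectory, score it with a (stand-in) ground-noise cost, and keep changes
# that reduce noise while respecting a simple safety constraint. The cost
# function and constraint here are placeholders, not a rotorcraft noise model.
import random

def noise_cost(altitudes):
    # Stand-in: lower altitude along the approach => more ground noise.
    return sum(1.0 / max(a, 50.0) for a in altitudes)

def is_safe(altitudes, max_descent_per_step=150.0):
    # Placeholder safety/comfort constraint: limit descent between waypoints.
    return all(prev - cur <= max_descent_per_step
               for prev, cur in zip(altitudes, altitudes[1:]))

def local_search(initial, iterations=5000, step=25.0, seed=0):
    rng = random.Random(seed)
    best = list(initial)
    best_cost = noise_cost(best)
    for _ in range(iterations):
        cand = list(best)
        i = rng.randrange(1, len(cand) - 1)          # keep endpoints fixed
        cand[i] += rng.choice([-step, step])
        if is_safe(cand) and noise_cost(cand) < best_cost:
            best, best_cost = cand, noise_cost(cand)
    return best, best_cost

initial = [1500, 1400, 1300, 1200, 1100, 1000, 900, 800]  # ft, waypoint altitudes
print(local_search(initial))
```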
NASA Astrophysics Data System (ADS)
Neidhardt, Alexander; Schönberger, Matthias; Plötz, Christian; Kronschnabl, Gerhard
2014-12-01
VGOS poses challenges for every aspect of a new radio telescope. For the future software and hardware control mechanisms, it also requires new developments and solutions. More experiments, more data, high-speed data transfers through the Internet, and real-time monitoring of current system status information must be handled. Additionally, an optimization of the observation shifts is required to reduce workload and costs. Within the framework of the development of the new 13.2-m Twin radio Telescopes Wettzell (TTW) and in combination with upgrades of the 20-m Radio Telescope Wettzell (RTW), some new technical realizations are under development and testing. Besides the activities for the realization of remote control, mainly supported during the project ``Novel EXploration Pushing Robust e-VLBI Services (NEXPReS)'' of the European VLBI Network (EVN), autonomous, automated, and unattended observations are also planned. A basic infrastructure should enable these, e.g., independent monitoring and security systems or additional, local high-speed transfer networks to ship data directly from a telescope to the main control room.
OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS
2011-01-01
Background Traditional scientific workflow platforms usually run individual experiments with little evaluation and analysis of performance as required by automated experimentation in which scientists are being allowed to access numerous applicable workflows rather than being committed to a single one. Experimental protocols and data under a peer-to-peer environment could potentially be shared freely without any single point of authority to dictate how experiments should be run. In such environment it is necessary to have mechanisms by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure by both simulated and real-world bioinformatics experiments involving multi-agent interactions. Methods A simulated experiment environment with a peer ranking capability was specified by the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. The peers such as MS/MS protein identification services (including web-enabled and independent programs) were made accessible as OpenKnowledge Components (OKCs) for automated execution as peers in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer ranking algorithms. Results Peer ranking experiments with simulated peers exhibited characteristic behaviours, e.g., power law effect (a few dominant peers dominate), similar to that observed in the traditional Web. Real-world experiments were run using an interaction model in LCC involving two different types of MS/MS protein identification peers, viz., peptide fragment fingerprinting (PFF) and de novo sequencing with another peer ranking algorithm simply based on counting the successful and failed runs. This study demonstrated a novel integration and useful evaluation of specific proteomic peers and found MASCOT to be a dominant peer as judged by peer ranking. Conclusion The simulated and real-world experiments in the present study demonstrated that the OpenKnowledge infrastructure with peer ranking capability can serve as an evaluative environment for automated experimentation. PMID:22192521
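The simple count-based ranking mentioned above (counting successful and failed runs per peer) can be sketched as follows; this is an illustration of the idea, not the OpenKnowledge/LCC implementation, and the peer names are hypothetical.

```python
# Minimal sketch of the simple success/failure peer-ranking idea mentioned in
# the abstract (counting successful and failed runs per peer). This is an
# illustration, not the OpenKnowledge/LCC implementation.
from collections import defaultdict

class PeerRanker:
    def __init__(self):
        self.success = defaultdict(int)
        self.failure = defaultdict(int)

    def record(self, peer, ok):
        (self.success if ok else self.failure)[peer] += 1

    def score(self, peer):
        s, f = self.success[peer], self.failure[peer]
        return s / (s + f) if s + f else 0.0     # fraction of successful runs

    def ranking(self):
        peers = set(self.success) | set(self.failure)
        return sorted(peers, key=self.score, reverse=True)

ranker = PeerRanker()
for peer, ok in [("MASCOT", True), ("MASCOT", True), ("denovo", False),
                 ("denovo", True), ("MASCOT", True)]:
    ranker.record(peer, ok)
print(ranker.ranking())        # e.g. ['MASCOT', 'denovo']
```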
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... Customs Automation Program Test Concerning Automated Commercial Environment (ACE) Cargo Release (Formerly... Simplified Entry functionality in the Automated Commercial Environment (ACE). Originally, the test was known...) test concerning Automated Commercial Environment (ACE) Simplified Entry (SE test) functionality is...
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, therefore avoiding user or production jobs to be sent to problematic sites.
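A hedged sketch of the auto-exclusion logic described above follows: sites whose recent functional-test efficiency drops below a threshold are excluded from brokerage and readmitted when they recover. The window size and thresholds are assumptions, not the HammerCloud or PanDA settings.

```python
# Hedged sketch of the exclusion logic the abstract describes: sites whose
# recent functional tests fall below an efficiency threshold are excluded from
# brokerage, and re-included once they recover. Thresholds and the data layout
# are assumptions; this is not the HammerCloud/PanDA implementation.
from collections import deque

class SiteStatus:
    def __init__(self, window=20, exclude_below=0.8, readmit_above=0.9):
        self.results = {}                 # site -> deque of recent pass/fail
        self.excluded = set()
        self.window, self.exclude_below, self.readmit_above = (
            window, exclude_below, readmit_above)

    def report(self, site, passed):
        q = self.results.setdefault(site, deque(maxlen=self.window))
        q.append(bool(passed))
        eff = sum(q) / len(q)
        if eff < self.exclude_below:
            self.excluded.add(site)       # stop brokering jobs to this site
        elif eff > self.readmit_above:
            self.excluded.discard(site)   # site recovered, re-include it

    def usable_sites(self):
        return sorted(set(self.results) - self.excluded)

status = SiteStatus()
for outcome in [True, False, False, False, True]:
    status.report("SITE_A", outcome)
status.report("SITE_B", True)
print(status.usable_sites(), status.excluded)
```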
A semi-automated workflow for biodiversity data retrieval, cleaning, and quality control
Mathew, Cherian; Obst, Matthias; Vicario, Saverio; Haines, Robert; Williams, Alan R.; de Jong, Yde; Goble, Carole
2014-01-01
Abstract The compilation and cleaning of data needed for analyses and prediction of species distributions is a time consuming process requiring a solid understanding of data formats and service APIs provided by biodiversity informatics infrastructures. We designed and implemented a Taverna-based Data Refinement Workflow which integrates taxonomic data retrieval, data cleaning, and data selection into a consistent, standards-based, and effective system hiding the complexity of underlying service infrastructures. The workflow can be freely used both locally and through a web-portal which does not require additional software installations by users. PMID:25535486
Device-Enabled Authorization in the Grey System
2005-02-01
ERIC Educational Resources Information Center
Zhao, Weiyi
2011-01-01
Wireless mesh networks (WMNs) have recently emerged to be a cost-effective solution to support large-scale wireless Internet access. They have numerous applications, such as broadband Internet access, building automation, and intelligent transportation systems. One research challenge for Internet-based WMNs is to design efficient mobility…
Software for roof defects recognition on aerial photographs
NASA Astrophysics Data System (ADS)
Yudin, D.; Naumov, A.; Dolzhenko, A.; Patrakova, E.
2018-05-01
The article presents information on software for recognizing roof defects on aerial photographs taken by drones. An aerial image segmentation mechanism is described. It allows detecting roof defects – unsmoothness that causes water stagnation after rain. It is shown that the HSV-transformation approach allows quick detection of stagnation areas, their sizes, and their perimeters, but is sensitive to shadows and changes in roofing type. A deep Fully Convolutional Network (FCN) solution eliminates this drawback. The test data set consists of roofing photos with defects and corresponding binary masks. The FCN approach gave acceptable image segmentation results in terms of average Dice metric. This software can be used to automate inspection of roof conditions in the production sector and in housing and utilities infrastructure.
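A minimal sketch of the HSV-threshold step described above is given below, using OpenCV; the threshold values, minimum area, and file name are illustrative assumptions rather than the paper's calibrated parameters.

```python
# Illustrative sketch of the HSV-threshold idea described above: convert a roof
# photo to HSV, threshold for dark "wet" regions, and report the area and
# perimeter of each connected region. The threshold values are assumptions,
# not the paper's calibrated parameters.
import cv2

def stagnation_regions(bgr_image, min_area=200):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Assumed range for standing water: any hue/saturation, low value (dark).
    mask = cv2.inRange(hsv, (0, 0, 0), (180, 255, 90))
    # OpenCV 4.x return signature assumed (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        area = cv2.contourArea(c)
        if area >= min_area:
            regions.append({"area_px": area,
                            "perimeter_px": cv2.arcLength(c, True)})
    return mask, regions

img = cv2.imread("roof_photo.jpg")       # hypothetical input image
if img is not None:
    _, regions = stagnation_regions(img)
    print(len(regions), "candidate stagnation areas", regions[:3])
```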
HiCAT Software Infrastructure: Safe hardware control with object oriented Python
NASA Astrophysics Data System (ADS)
Moriarty, Christopher; Brooks, Keira; Soummer, Remi
2018-01-01
High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit the air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object-oriented Python-based infrastructure to safely automate hardware control and optical experiments. Specifically, it supports conducting high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
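The pattern described above can be sketched in a stripped-down form: a context-managed device that always closes, plus a child process that watches a safety condition and sets an event the experiment loop checks. The device and sensor stand-ins below are hypothetical, not the HiCAT code.

```python
# Stripped-down sketch of the pattern described above: hardware wrapped in a
# context manager so it always closes, plus a child process that monitors a
# safety condition and sets an event the main loop checks. Device and sensor
# classes here are hypothetical stand-ins, not the HiCAT implementation.
import multiprocessing
import time
from contextlib import contextmanager

@contextmanager
def open_device(name):
    print(f"opening {name}")
    try:
        yield name                      # stand-in for a real hardware handle
    finally:
        print(f"closing {name}")        # runs even if an error is raised

def humidity_monitor(stop_event, limit=0.6):
    while not stop_event.is_set():
        humidity = 0.4                  # stand-in for a real sensor read
        if humidity > limit:
            stop_event.set()            # tell the experiment to shut down
        time.sleep(0.1)

if __name__ == "__main__":
    stop = multiprocessing.Event()
    monitor = multiprocessing.Process(target=humidity_monitor, args=(stop,))
    monitor.start()
    with open_device("deformable_mirror"):
        for step in range(5):
            if stop.is_set():
                break                   # context manager still closes the device
            time.sleep(0.1)             # stand-in for one experiment step
    stop.set()
    monitor.join()
```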
Klinkenberg-Ramirez, Stephanie; Neri, Pamela M; Volk, Lynn A; Samaha, Sara J; Newmark, Lisa P; Pollard, Stephanie; Varugheese, Matthew; Baxter, Samantha; Aronson, Samuel J; Rehm, Heidi L; Bates, David W
2016-01-01
Partners HealthCare Personalized Medicine developed GeneInsight Clinic (GIC), a tool designed to communicate updated variant information from laboratory geneticists to treating clinicians through automated alerts, categorized by level of variant interpretation change. The study aimed to evaluate feedback from the initial users of the GIC, including the advantages and challenges of receiving this variant information and using this technology at the point of care. Healthcare professionals from two clinics that ordered genetic testing for cardiomyopathy and related disorders were invited to participate in one-hour semi-structured interviews and/or a one-hour focus group. Using a Grounded Theory approach, transcript concepts were coded and organized into themes. Two genetic counselors and two physicians from two treatment clinics participated in individual interviews. Focus group participants included one genetic counselor and four physicians. Analysis resulted in 8 major themes related to structuring and communicating variant knowledge, GIC's impact on the clinic, and suggestions for improvements. The interview analysis identified longitudinal patient care, family data, and growth in genetic testing content as potential challenges to optimization of the GIC infrastructure. Participants agreed that GIC implementation increased efficiency and effectiveness of the clinic through increased access to genetic variant information at the point of care. Development of information technology (IT) infrastructure to aid in the organization and management of genetic variant knowledge will be critical as the genetic field moves towards whole exome and whole genome sequencing. Findings from this study could be applied to future development of IT support for genetic variant knowledge management that would serve to improve clinicians' ability to manage and care for patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaks, D; Fletcher, R; Salamon, S
Purpose: To develop an online framework that tracks a patient's plan from initial simulation to treatment and that helps automate elements of the physics plan checks usually performed in the record and verify (RV) system and treatment planning system. Methods: We have developed PlanTracker, an online plan tracking system that automatically imports new patient tasks and follows them through treatment planning, physics checks, therapy check, and chart rounds. A survey was designed to collect information about the amount of time spent by medical physicists in non-physics related tasks. We then assessed these non-physics tasks for automation. Using these surveys, we directed our PlanTracker software development towards the automation of intra-plan physics review. We then conducted a systematic evaluation of PlanTracker's accuracy by generating test plans in the RV system software designed to mimic real plans, in order to test its efficacy in catching errors both real and theoretical. Results: PlanTracker has proven to be an effective improvement to the clinical workflow in a radiotherapy clinic. We present data indicating that roughly 1/3 of the physics plan check can be automated, and the workflow optimized, and show the functionality of PlanTracker. When the full system is in clinical use we will present data on improvement of time use in comparison to survey data prior to PlanTracker implementation. Conclusion: We have developed a framework for plan tracking and automatic checks in radiation therapy. We anticipate using PlanTracker as a basis for further development in clinical/research software. We hope that by eliminating the simplest and most time-consuming checks, medical physicists may be able to spend their time on plan quality and other physics tasks rather than on arithmetic and logic checks. We see this development as part of a broader initiative to advance the clinical/research informatics infrastructure surrounding the radiotherapy clinic. This research project has been financially supported by Varian Medical Systems, Palo Alto, CA, through a Varian MRA.
NASA Astrophysics Data System (ADS)
Albrecht, F.; Hölbling, D.; Friedl, B.
2017-09-01
Landslide mapping benefits from the ever increasing availability of Earth Observation (EO) data resulting from programmes like the Copernicus Sentinel missions and improved infrastructure for data access. However, there arises the need for improved automated landslide information extraction processes from EO data while the dominant method is still manual delineation. Object-based image analysis (OBIA) provides the means for the fast and efficient extraction of landslide information. To prove its quality, automated results are often compared to manually delineated landslide maps. Although there is awareness of the uncertainties inherent in manual delineations, there is a lack of understanding how they affect the levels of agreement in a direct comparison of OBIA-derived landslide maps and manually derived landslide maps. In order to provide an improved reference, we present a fuzzy approach for the manual delineation of landslides on optical satellite images, thereby making the inherent uncertainties of the delineation explicit. The fuzzy manual delineation and the OBIA classification are compared by accuracy metrics accepted in the remote sensing community. We have tested this approach for high resolution (HR) satellite images of three large landslides in Austria and Italy. We were able to show that the deviation of the OBIA result from the manual delineation can mainly be attributed to the uncertainty inherent in the manual delineation process, a relevant issue for the design of validation processes for OBIA-derived landslide maps.
Management of information in a research and development agency
NASA Technical Reports Server (NTRS)
Keene, Wallace O.
1990-01-01
The NASA program for managing scientific and technical information (STI) is examined, noting the technological, managerial, educational, and legal aspects of transferring and disseminating information. A definition of STI is introduced and NASA's STI-related management programs are outlined. Consideration is given to the role of STI management in NASA mission programs, research efforts supporting the management and use of STI, STI program interfaces, and the Automated Information Management Program to eliminate redundant automation efforts in common administrative functions. The infrastructure needed to manage the broad base of NASA information and the interfaces between NASA's STI management and external organizations are described.
A Method for Evaluating the Safety Impacts of Air Traffic Automation
NASA Technical Reports Server (NTRS)
Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Bonesteel, Charles
1998-01-01
This report describes a methodology for analyzing the safety and operational impacts of emerging air traffic technologies. The approach integrates traditional reliability models of the system infrastructure with models that analyze the environment within which the system operates, and models of how the system responds to different scenarios. Products of the analysis include safety measures such as predicted incident rates, predicted accident statistics, and false alarm rates; and operational availability data. The report demonstrates the methodology with an analysis of the operation of the Center-TRACON Automation System at Dallas-Fort Worth International Airport.
The Automation and Exoplanet Orbital Characterization from the Gemini Planet Imager Exoplanet Survey
NASA Astrophysics Data System (ADS)
Jinfei Wang, Jason; Graham, James; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry; Kalas, Paul; arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Ruffio, Jean-Baptiste; Sivaramakrishnan, Anand; Gemini Planet Imager Exoplanet Survey Collaboration
2018-01-01
The Gemini Planet Imager (GPI) Exoplanet Survey (GPIES) is a multi-year 600-star survey to discover and characterize young Jovian exoplanets and their planet forming environments. For large surveys like GPIES, it is critical to have a uniform dataset processed with the latest techniques and calibrations. I will describe the GPI Data Cruncher, an automated data processing framework that is able to generate fully reduced data minutes after the data are taken and can also reprocess the entire campaign in a single day on a supercomputer. The Data Cruncher integrates into a larger automated data processing infrastructure which syncs, logs, and displays the data. I will discuss the benefits of the GPIES data infrastructure, including optimizing observing strategies, finding planets, characterizing instrument performance, and constraining giant planet occurrence. I will also discuss my work in characterizing the exoplanets we have imaged in GPIES through monitoring their orbits. Using advanced data processing algorithms and GPI's precise astrometric calibration, I will show that GPI can achieve one milliarcsecond astrometry on the extensively-studied planet Beta Pic b. With GPI, we can confidently rule out a possible transit of Beta Pic b, but have precise timings on a Hill sphere transit, and I will discuss efforts to search for transiting circumplanetary material this year. I will also discuss the orbital monitoring of other exoplanets as part of GPIES.
Software development infrastructure for the HYBRID modeling and simulation project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure is trying to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety "research and development" software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python package called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list. This is a list to which everybody can send emails that will be received by the collective of the developers and managers involved in the project. Thirdly, to exchange documents quickly, a SharePoint directory has been set up. SharePoint allows teams and organizations to intelligently share and collaborate on content from anywhere.
Integrated homeland security system with passive thermal imaging and advanced video analytics
NASA Astrophysics Data System (ADS)
Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert
2007-04-01
A complete detection, management, and control security system is absolutely essential to preempting criminal and terrorist assaults on key assets and critical infrastructure. According to Tom Ridge, former Secretary of the US Department of Homeland Security, "Voluntary efforts alone are not sufficient to provide the level of assurance Americans deserve and they must take steps to improve security." Further, it is expected that Congress will mandate private sector investment of over $20 billion in infrastructure protection between 2007 and 2015, which is incremental to funds currently being allocated to key sites by the Department of Homeland Security. Nearly 500,000 individual sites have been identified by the US Department of Homeland Security as critical infrastructure sites that would suffer severe and extensive damage if a security breach should occur. In fact, one major breach in any of 7,000 critical infrastructure facilities threatens more than 10,000 people. And one major breach in any of 123 facilities (identified as "most critical" among the 500,000) threatens more than 1,000,000 people. Current visible, nightvision or near infrared imaging technology alone has limited foul-weather viewing capability, poor nighttime performance, and limited nighttime range. And many systems today yield excessive false alarms, are managed by fatigued operators, are unable to manage the voluminous data captured, or lack the ability to pinpoint where an intrusion occurred. In our 2006 paper, "Critical Infrastructure Security Confidence Through Automated Thermal Imaging", we showed how a highly effective security solution can be developed by integrating what are now available "next-generation technologies", which include:
• Thermal imaging for the highly effective detection of intruders in the dark of night and in challenging weather conditions at the sensor imaging level - we refer to this as the passive thermal sensor level detection building block
• Automated software detection for creating initial alerts - we refer to this as software level detection, the next level building block
• Immersive 3D visual assessment for situational awareness and to manage the reaction process - we refer to this as automated intelligent situational awareness, a third building block
• Wide area command and control capabilities to allow control from a remote location - we refer to this as the management and process control building block, integrating together the lower level building elements
In addition, this paper describes three live installations of complete, total systems that incorporate visible and thermal cameras as well as advanced video analytics. Discussion of both system elements and design is extensive.
Future Earth: Reducing Loss By Automating Response to Earthquake Shaking
NASA Astrophysics Data System (ADS)
Allen, R. M.
2014-12-01
Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet, the same inter-connected infrastructure also allows us to rapidly detect earthquakes as they begin, and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Los Angelinos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts, but could also be used to collect data, and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake-detectors. As our development of the technology continues, we can anticipate ever-more automated response to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US. We will illustrate how smartphones can contribute to the system. Finally, we will review applications of the information to reduce future losses.
Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.
Stockton, David B; Santamaria, Fidel
2017-10-01
We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
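As a generic illustration of the "local specialized database" step mentioned above (not the published toolset), the sketch below stores extracted per-cell features in SQLite so later analysis steps can query them; the schema and feature names are assumptions.

```python
# Generic sketch of the "local specialized database" step described above:
# store per-cell extracted features in SQLite so later analysis and
# model-fitting steps can query them. The schema and feature names are
# illustrative assumptions, not the published toolset's layout.
import sqlite3

def build_feature_db(path, cell_features):
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS cell_features (
                       cell_id INTEGER PRIMARY KEY,
                       resting_potential_mv REAL,
                       input_resistance_mohm REAL,
                       mean_firing_rate_hz REAL)""")
    con.executemany(
        "INSERT OR REPLACE INTO cell_features VALUES (?, ?, ?, ?)",
        cell_features)
    con.commit()
    return con

rows = [(101, -68.2, 150.0, 12.5),       # hypothetical extracted features
        (102, -71.5, 210.3, 4.8)]
con = build_feature_db("cell_types_local.db", rows)
fast = con.execute("SELECT cell_id FROM cell_features "
                   "WHERE mean_firing_rate_hz > 10").fetchall()
print(fast)
con.close()
```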
Data-Acquisition System With Remotely Adjustable Amplifiers
NASA Technical Reports Server (NTRS)
Nurge, Mark A.; Larson, William E.; Hallberg, Carl G.; Thayer, Steven W.; Ake, Jeffrey C.; Gleman, Stuart M.; Thompson, David L.; Medelius, Pedro J.; Crawford, Wayne A.; Vangilder, Richard M.;
1994-01-01
Improved data-acquisition system having both centralized and decentralized characteristics has been developed. Provides infrastructure for automation and standardization of operation, maintenance, calibration, and adjustment of many transducers. Increases efficiency by reducing need for diminishing work force of highly trained technicians to perform routine tasks. Large industrial and academic laboratory facilities benefit from systems like this one.
A Quantitative Microbial Risk Assessment (QMRA) infrastructure that automates the manual process of characterizing transport of pathogens and microorganisms, from the source of release to a point of exposure, has been developed by loosely configuring a set of modules and process-...
Integrated Air Surveillance Concept of Operations
2011-11-01
information, intelligence, weather data, and other situational awareness-related information. 4.2.4 Shared Services Automated processing of sensor and...other surveillance information will occur through shared services, accessible through an enterprise network infrastructure, that provide for collecting...also be provided, such as information discovery and translation. The IS architecture effort will identify specific shared services. Shared
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-14
... Program (NCAP) Test Concerning Automated Commercial Environment (ACE) Simplified Entry: Modification of... Automated Commercial Environment (ACE). The test's participant selection criteria are modified to reflect... (NCAP) test concerning Automated Commercial Environment (ACE) Simplified Entry functionality (Simplified...
A Security Monitoring Framework For Virtualization Based HEP Infrastructures
NASA Astrophysics Data System (ADS)
Gomez Ramirez, A.; Martinez Pedreira, M.; Grigoras, C.; Betev, L.; Lara, C.; Kebschull, U.;
2017-10-01
High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequence of system calls for detecting anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that accomplishes these requirements, with a proof of concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves the security by isolating services and Jobs without a significant performance impact. We also describe a collected dataset for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), logfiles from operating system services, and system call data collected from production Jobs running in an ALICE Grid test site and a big set of malware samples. This malware set was collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious Jobs.
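One way such resource-consumption samples could feed an anomaly detector is sketched below, using scikit-learn's IsolationForest purely as an example algorithm; the feature columns and values are synthetic assumptions, not the framework's actual model or dataset.

```python
# Hedged sketch of how resource-consumption samples like those in the collected
# dataset could feed an anomaly detector. IsolationForest is used only as an
# example algorithm; the feature names and values are illustrative assumptions,
# not the framework's actual model or data.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: cpu_percent, ram_mb, net_kbps per job sample (synthetic "normal" jobs)
rng = np.random.default_rng(42)
normal_jobs = np.column_stack([
    rng.normal(80, 10, 500),        # CPU-bound analysis jobs
    rng.normal(1500, 200, 500),
    rng.normal(50, 20, 500),
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_jobs)

# A job with unusually high network traffic and low CPU, e.g. data exfiltration
suspect = np.array([[15.0, 400.0, 5000.0]])
print(detector.predict(suspect))    # -1 flags an anomaly, +1 looks normal
```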
Requirements-Driven Log Analysis Extended Abstract
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2012-01-01
Imagine that you are tasked to help a project improve its testing effort. In a realistic scenario it will quickly become clear that having an impact is difficult. First of all, it will likely be a challenge to suggest an alternative approach which is significantly more automated and/or more effective than current practice. The reality is that an average software system has a complex input/output behavior. An automated testing approach will have to auto-generate test cases, each being a pair (i, o) consisting of a test input i and an oracle o. The test input i has to be somewhat meaningful, and the oracle o can be very complicated to compute. Second, even in cases where some testing technology has been developed that might improve current practice, it is then likely difficult to completely change the current behavior of the testing team unless the technique is obviously superior and does everything already done by existing technology. So is there an easier way to incorporate formal methods-based approaches than the full-fledged test revolution? Fortunately the answer is affirmative. A relatively simple approach is to benefit from possibly already existing logging infrastructure, which after all is part of most systems put in production. A log is a sequence of events, generated by special log recording statements, most often manually inserted in the code by the programmers. An event can be considered as a data record: a mapping from field names to values. We can analyze such a log using formal methods, for example checking it against a formal specification. This separates running the system from analyzing its behavior. It is not meant as an alternative to testing since it does not address the important input generation problem. However, it offers a solution which testing teams might accept since it has low impact on the existing process. A single person might be assigned to perform such log analysis, compared to the entire testing team changing behavior.
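A minimal sketch of the idea, assuming a toy event schema (fields op, file, time) invented for illustration: the log is a list of field-name-to-value records, and a simple requirement ("every opened file is eventually closed") is checked over it.

```python
# Toy requirements-driven log analysis; event schema is an invented example.
def check_open_close(log):
    """Return the set of files opened but never closed (violations of the requirement)."""
    open_files = set()
    for event in log:
        if event["op"] == "open":
            open_files.add(event["file"])
        elif event["op"] == "close":
            open_files.discard(event["file"])
    return open_files   # non-empty set => requirement violated

log = [
    {"time": 1, "op": "open",  "file": "a.dat"},
    {"time": 2, "op": "open",  "file": "b.dat"},
    {"time": 3, "op": "close", "file": "a.dat"},
]
violations = check_open_close(log)
print("still open at end of log:", violations or "none")
```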
NASA Astrophysics Data System (ADS)
Nesmith, Kevin A.; Carver, Susan
2014-05-01
With the advancement of design processes down to sub-7 nm levels, the Electronic Design Automation industry appears to be reaching the end of its advancements, as the size of the silicon atom becomes the limiting factor. Or is it? The commercial viability of mass-producing silicon photonics is bringing about the Optoelectronic Design Automation (OEDA) industry. With the science of photonics in its infancy, adding these circuits to increasingly complex electronic designs will allow for new generations of advancements. Learning from the past 50 years of the EDA industry's mistakes and missed opportunities, the photonics industry is starting with electronic standards and extending them to become photonically aware. Adapting pre-existing standards to this relatively new industry will allow for easier integration into the present infrastructure and faster time to market.
NASA Astrophysics Data System (ADS)
Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.
2012-06-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for collecting and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, streams of messages sent by applications via the message reporting system, together with data published from applications via information services, are the main sources of knowledge about the correctness of running operations. The flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful information is not in the single message or update, but in the aggregated behavior over a certain time-line. The AAL project is aimed at reducing manpower needs and at assuring a constantly high quality of problem detection by automating most of the monitoring tasks and providing real-time correlation of data-taking and system metrics. This project combines technologies coming from different disciplines; in particular it leverages an Event-Driven Architecture to unify the flow of data from the ATLAS infrastructure, a Complex Event Processing (CEP) engine for correlation of events, and a message-oriented architecture for component integration. The project is composed of two main components: a core processing engine, responsible for correlation of events through expert-defined queries, and a web-based front-end to present real-time information and interact with the system. All components work in a loosely coupled, event-based architecture, with a message broker to centralize all communication between modules. The result is an intelligent system able to extract and compute relevant information from the flow of operational data to provide real-time feedback to human experts who can promptly react when needed. The paper presents the design and implementation of the AAL project, together with the results of its usage as an automated monitoring assistant for the ATLAS data taking infrastructure.
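As a hedged illustration of the correlation idea (not the AAL implementation), here is a tiny sliding-window rule of the kind a CEP query might express: alert when more than a threshold number of error messages from one application arrive within a time window. The application name, window, and threshold are invented.

```python
# Toy complex-event-processing style correlation over a message stream.
from collections import defaultdict, deque

WINDOW_S, THRESHOLD = 60, 10          # invented values for illustration
recent = defaultdict(deque)           # application name -> timestamps of recent errors

def on_message(app, severity, timestamp):
    """Return an alert string when the error rate for `app` exceeds the threshold."""
    if severity != "ERROR":
        return None
    q = recent[app]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_S:   # drop errors outside the window
        q.popleft()
    if len(q) > THRESHOLD:
        return f"ALERT: {len(q)} errors from {app} in the last {WINDOW_S}s"
    return None

# Simulate a burst of errors from one (hypothetical) application.
for t in range(12):
    alert = on_message("HLT_supervisor", "ERROR", 1000.0 + t)
    if alert:
        print(alert)
```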
Inaugural Genomics Automation Congress and the coming deluge of sequencing data.
Creighton, Chad J
2010-10-01
Presentations at Select Biosciences's first 'Genomics Automation Congress' (Boston, MA, USA) in 2010 focused on next-generation sequencing and the platforms and methodology around them. The meeting provided an overview of sequencing technologies, both new and emerging. Speakers shared their recent work on applying sequencing to profile cells for various levels of biomolecular complexity, including DNA sequences, DNA copy, DNA methylation, mRNA and microRNA. With sequencing time and costs continuing to drop dramatically, a virtual explosion of very large sequencing datasets is at hand, which will probably present challenges and opportunities for high-level data analysis and interpretation, as well as for information technology infrastructure.
Software to Manage the Unmanageable
NASA Technical Reports Server (NTRS)
2005-01-01
In 1995, NASA's Jet Propulsion Laboratory (JPL) contracted Redmond, Washington-based Lucidoc Corporation to design a technology infrastructure to automate the intersection between policy management and operations management with advanced software that automates document workflow, document status, and uniformity of document layout. JPL had very specific parameters for the software. It expected to store and catalog over 8,000 technical and procedural documents integrated with hundreds of processes. The project ended in 2000, but NASA still uses the resulting highly secure document management system, and Lucidoc has managed to help other organizations, large and small, with integrating document flow and operations management to ensure a compliance-ready culture.
Open Access: From Myth to Paradox
Ginsparg, Paul [Cornell University, Ithaca, New York, United States]
2018-04-19
True open access to scientific publications not only gives readers the possibility to read articles without paying subscription, but also makes the material available for automated ingestion and harvesting by 3rd parties. Once articles and associated data become universally treatable as computable objects, openly available to 3rd party aggregators and value-added services, what new services can we expect, and how will they change the way that researchers interact with their scholarly communications infrastructure? I will discuss straightforward applications of existing ideas and services, including citation analysis, collaborative filtering, external database linkages, interoperability, and other forms of automated markup, and speculate on the sociology of the next generation of users.
Seeing Red: Discourse, Metaphor, and the Implementation of Red Light Cameras in Texas
ERIC Educational Resources Information Center
Hayden, Lance Alan
2009-01-01
This study examines the deployment of automated red light camera systems in the state of Texas from 2003 through late 2007. The deployment of new technologies in general, and surveillance infrastructures in particular, can prove controversial and challenging for the formation of public policy. Red light camera surveillance during this period in…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... CBP with authority to conduct limited test programs or procedures designed to evaluate planned... aspects of this test, including the design, conduct and implementation of the test, in order to determine... Environment (ACE); Announcement of National Customs Automation Program Test of Automated Procedures for In...
Automatically generated code for relativistic inhomogeneous cosmologies
NASA Astrophysics Data System (ADS)
Bentivegna, Eloisa
2017-02-01
The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.
Highly Automated Arrival Management and Control System Suitable for Early NextGen
NASA Technical Reports Server (NTRS)
Swenson, Harry N.; Jung, Jaewoo
2013-01-01
This is a presentation of previously published work conducted in the development of the Terminal Area Precision Scheduling and Spacing (TAPSS) system. Included are concept and technical descriptions of the TAPSS system and results from human-in-the-loop simulations conducted at Ames Research Center. The Terminal Area Precision Scheduling and Spacing system has been demonstrated, through research and extensive high-fidelity simulation studies, to have benefits in airport arrival throughput, supporting efficient arrival descents, and enabling mixed aircraft navigation capability operations during periods of high congestion. NASA is currently porting the TAPSS system into the FAA TBFM and STARS system prototypes to ensure its ability to operate in the FAA automation infrastructure. The NASA ATM Demonstration Project is using the TAPSS technologies to provide the ground-based automation tools to enable airborne Interval Management (IM) capabilities. NASA and the FAA have initiated a Research Transition Team to enable potential TAPSS and IM technology transfer.
Automated Data Quality Assurance using OGC Sensor Web Enablement Frameworks for Marine Observatories
NASA Astrophysics Data System (ADS)
Toma, Daniel; Bghiel, Ikram; del Rio, Joaquin; Hidalgo, Alberto; Carreras, Normandino; Manuel, Antoni
2014-05-01
Over the past years, environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent. Therefore, many sensor networks are increasingly deployed to monitor our environment. But due to the large number of sensor manufacturers, accompanying protocols and data encodings, automated integration and data quality assurance of diverse sensors in observing systems are not straightforward, requiring the development of data management code and tedious manual configuration. However, over the past few years it has been demonstrated that Open Geospatial Consortium (OGC) frameworks can enable web services with fully described sensor systems, including data processing, sensor characteristics, and quality control tests and results. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The data management software which enables access to sensors, data processing and quality control tests has to be implemented, and the results have to be manually mapped to the SWE models. In this contribution, we describe a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) the OGC PUCK protocol - a simple standard embedded instrument protocol to store and retrieve directly from the devices the declarative description of sensor characteristics and quality control tests, (2) an automatic mechanism for data processing and quality control tests underlying the Sensor Web - the Sensor Interface Descriptor (SID) concept, as well as (3) a model for the declarative description of sensors which serves as a generic data management mechanism - designed as a profile and extension of OGC SWE's SensorML standard. We implement and evaluate our approach by applying it to the OBSEA Observatory, demonstrating the ability to assess data quality for temperature, salinity, air pressure and wind speed and direction observations off the coast of Garraf, in north-eastern Spain.
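A minimal sketch of the kind of declarative quality-control test such a sensor description might carry, with invented property names and plausibility ranges (not the OBSEA configuration): each observation is checked against a range and flagged.

```python
# Declarative range-check QC, with invented ranges and field names.
QC_RANGES = {
    "sea_water_temperature": (5.0, 30.0),     # degrees C
    "sea_water_salinity":    (30.0, 40.0),    # PSU
    "air_pressure":          (950.0, 1050.0), # hPa
}

def range_check(observation):
    """Flag each observed property as 'good' or 'bad' against its plausible range."""
    flags = {}
    for prop, value in observation.items():
        lo, hi = QC_RANGES.get(prop, (float("-inf"), float("inf")))
        flags[prop] = "good" if lo <= value <= hi else "bad"
    return flags

obs = {"sea_water_temperature": 17.2, "sea_water_salinity": 52.1}
print(range_check(obs))   # salinity fails the range test
```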
Massambu, Charles; Mwangi, Christina
2009-06-01
The rapid scale-up of the care and treatment programs in Tanzania during the preceding 4 years has greatly increased the demand for quality laboratory services for diagnosis of HIV and monitoring patients during antiretroviral therapy. Laboratory services were not in a position to cope with this demand owing to poor infrastructure, lack of human resources, erratic and/or lack of reagent supply and commodities, and slow manual technologies. With the limited human resources in the laboratory and the need for scaling up the care and treatment program, it became necessary to install automated equipment and train personnel for the increased volume of testing and new tests across all laboratory levels. With the numerous partners procuring equipment, the possibility of a multitude of equipment platforms with attendant challenges for procurement of reagents, maintenance of equipment, and quality assurance arose. Tanzania, therefore, had to harmonize laboratory tests and standardize laboratory equipment at different levels of the laboratory network. The process of harmonization of tests and standardization of equipment included assessment of laboratories, review of guidelines, development of a national laboratory operational plan, and stakeholder advocacy. This document outlines this process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piette, Mary Ann; Sezgen, Osman; Watson, David S.
This report describes the results of a research project to develop and evaluate the performance of new Automated Demand Response (Auto-DR) hardware and software technology in large facilities. Demand Response (DR) is a set of activities to reduce or shift electricity use to improve electric grid reliability, manage electricity costs, and ensure that customers receive signals that encourage load reduction during times when the electric grid is near its capacity. The two main drivers for widespread demand responsiveness are the prevention of future electricity crises and the reduction of electricity prices. Additional goals for price responsiveness include equity through cost of service pricing, and customer control of electricity usage and bills. The technology developed and evaluated in this report could be used to support numerous forms of DR programs and tariffs. For the purpose of this report, we have defined three levels of Demand Response automation. Manual Demand Response involves manually turning off lights or equipment; this can be a labor-intensive approach. Semi-Automated Response involves the use of building energy management control systems for load shedding, where a preprogrammed load shedding strategy is initiated by facilities staff. Fully-Automated Demand Response is initiated at a building or facility through receipt of an external communications signal--facility staff set up a pre-programmed load shedding strategy which is automatically initiated by the system without the need for human intervention. We have defined this approach to be Auto-DR. An important concept in Auto-DR is that a facility manager is able to 'opt out' or 'override' an individual DR event if it occurs at a time when the reduction in end-use services is not desirable. This project sought to improve the feasibility and nature of Auto-DR strategies in large facilities. The research focused on technology development, testing, characterization, and evaluation relating to Auto-DR. This evaluation also included the related decisionmaking perspectives of the facility owners and managers. Another goal of this project was to develop and test a real-time signal for automated demand response that provided a common communication infrastructure for diverse facilities. The six facilities recruited for this project were selected from the facilities that received CEC funds for new DR technology during California's 2000-2001 electricity crises (AB970 and SB-5X).
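A minimal sketch of the fully-automated DR pattern described above, under invented names: a facility client polls for an external event signal and triggers a pre-programmed shed strategy unless the operator has opted out. The signal source and shed actions are placeholders, not the project's implementation.

```python
# Toy Auto-DR client: poll a signal, shed load unless the operator opted out.
OPT_OUT = False   # facility manager can override any individual DR event

def get_dr_signal():
    # Placeholder for fetching a real price/reliability signal (e.g. from an
    # OpenADR-style server); hard-coded here purely for illustration.
    return {"event_active": True, "level": "moderate"}

def shed_load(level):
    # Pre-programmed strategy; the specific actions are placeholders.
    print(f"Shedding load for level={level}: raising zone setpoints, dimming lights 30%")

def poll_once():
    signal = get_dr_signal()
    if signal["event_active"] and not OPT_OUT:
        shed_load(signal["level"])
    else:
        print("No action taken")

poll_once()
```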
Supplanting Chinese Influence in Africa: The U.S. African Diaspora
2011-03-14
dysfunctional governments, to weak economies. Other critical factors include: inadequate basic infrastructure, high unemployment, rapid population growth...increased unemployment in many sectors. At the same time, local merchants and manufacturers are unable to favorably compete against Chinese...establish a pre-automated program modeled after a franchise plan. Selected individuals or teams will receive intense training and subsequently will be
2011-08-21
poultry, pork, beef, fish, and other meat products also are typically automated operations, done on electrically driven processing lines. Food ...Infrastructure... Power Outage Impact on Consumables (Food, Water, Medication)...transportation, consumables (food, water, and medication), and emergency services, are so highly dependent on reliable power supply from the grid, a
DOT National Transportation Integrated Search
2009-12-01
This volume focuses on one of the key components of the IRSV system, i.e., the AMBIS module. This module serves as one of the tools used in this study to translate raw remote sensing data in the form of either high-resolution aerial photos or v...
NASA Astrophysics Data System (ADS)
Slota, S.; Khalsa, S. J. S.
2015-12-01
Infrastructures are the result of systems, networks, and inter-networks that accrete, overlay and segment one another over time. As a result, working infrastructures represent a broad heterogeneity of elements - data types, computational resources, material substrates (computing hardware, physical infrastructure, labs, physical information resources, etc.) as well as organizational and social functions which result in divergent outputs and goals. Cyber infrastructure's engineering often defaults to a separation of the social from the technical that results in the engineering succeeding in limited ways, or the exposure of unanticipated points of failure within the system. Studying the development of middleware intended to mediate interactions among systems within an earth systems science infrastructure exposes organizational, technical and standards-focused negotiations endemic to a fundamental trait of infrastructure: its characteristic invisibility in use. Intended to perform a core function within the EarthCube cyberinfrastructure, the development, governance and maintenance of an automated brokering system is a microcosm of large-scale infrastructural efforts. Points of potential system failure, regardless of the extent to which they are more social or more technical in nature, can be considered in terms of the reverse salient: a point of social and material configuration that momentarily lags behind the progress of an emerging or maturing infrastructure. The implementation of the BCube data broker has exposed reverse salients in regards to the overall EarthCube infrastructure (and the role of middleware brokering) in the form of organizational factors such as infrastructural alignment, maintenance and resilience; differing and incompatible practices of data discovery and evaluation among users and stakeholders; and a preponderance of local variations in the implementation of standards and authentication in data access. These issues are characterized by their role in increasing tension or friction among components that are on the path to convergence and may help to predict otherwise-occluded endogenous points of failure or non-adoption in the infrastructure.
WE-FG-201-02: Automated Treatment Planning for Low-Resource Settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Court, L.
Many low- and middle-income countries lack the resources and services to manage cancer, from screening and diagnosis to radiation therapy planning, treatment and quality assurance. The challenges in upgrading or introducing the needed services are enormous, and include severe shortages in equipment and trained staff. In this symposium, we will describe examples of technology and scientific research that have the potential to impact all these areas. These include: (1) the development of high-quality/low-cost colposcopes for cervical cancer screening, (2) the application of automated radiotherapy treatment planning to reduce staffing shortages, (3) the development of a novel radiotherapy treatment unit, and (4) utilizing a cloud-based infrastructure to facilitate collaboration and QA. Learning Objectives: Understand some of the issues in cancer care in low-resource environments, including shortages in staff and equipment, and inadequate physical infrastructure for advanced radiotherapy. Understand the challenges in developing and deploying diagnostic and treatment devices and services for low-resource environments. Understand some of the emerging technological solutions for cancer management in LMICs. NCI; L. Court, NIH, Varian, Elekta; I. Feain, Ilana Feain is founder and CTO of Nano-X Pty Ltd.
Laboratory Testing of Demand-Response Enabled Household Appliances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparn, B.; Jin, X.; Earle, L.
2013-10-01
With the advent of the Advanced Metering Infrastructure (AMI) systems capable of two-way communications between the utility's grid and the building, there has been significant effort in the Automated Home Energy Management (AHEM) industry to develop capabilities that allow residential building systems to respond to utility demand events by temporarily reducing their electricity usage. Major appliance manufacturers are following suit by developing Home Area Network (HAN)-tied appliance suites that can take signals from the home's 'smart meter,' a.k.a. AMI meter, and adjust their run cycles accordingly. There are numerous strategies that can be employed by household appliances to respond to demand-side management opportunities, and they could result in substantial reductions in electricity bills for the residents depending on the pricing structures used by the utilities to incent these types of responses. The first step to quantifying these end effects is to test these systems and their responses in simulated demand-response (DR) conditions while monitoring energy use and overall system performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agalgaonkar, Yashodhan P.; Hammerstrom, Donald J.
The Pacific Northwest Smart Grid Demonstration (PNWSGD) was a smart grid technology performance evaluation project that included multiple U.S. states and cooperation from multiple electric utilities in the northwest region. One of the local objectives for the project was to achieve improved distribution system reliability. Toward this end, some PNWSGD utilities automated their distribution systems, including the application of fault detection, isolation, and restoration and advanced metering infrastructure. In light of this investment, a major challenge was to establish a correlation between implementation of these smart grid technologies and actual improvements of distribution system reliability. This paper proposes using Welch's t-test to objectively determine and quantify whether distribution system reliability is improving over time. The proposed methodology is generic, and it can be implemented by any utility after calculation of the standard reliability indices. The effectiveness of the proposed hypothesis testing approach is demonstrated through comprehensive practical results. It is believed that wider adoption of the proposed approach can help utilities to evaluate a realistic long-term performance of smart grid technologies.
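A short sketch of the proposed test using SciPy's implementation of Welch's t-test (ttest_ind with equal_var=False); the SAIDI values below are made up for illustration.

```python
# Welch's t-test on annual reliability indices before vs. after automation.
from scipy import stats

saidi_before = [118.0, 125.3, 131.1, 122.7, 127.9]   # minutes/customer/year (invented)
saidi_after  = [101.2, 109.8,  98.4, 104.1, 106.3]

t_stat, p_value = stats.ttest_ind(saidi_before, saidi_after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value supports the hypothesis that reliability (lower SAIDI)
# improved after the smart grid technologies were deployed.
```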
NASA Astrophysics Data System (ADS)
Zhang, Ying; Lantz, Nicholas; Guindon, Bert; Jiao, Xianfen
2017-01-01
Accurate and frequent monitoring of land surface changes arising from oil and gas exploration and extraction is a key requirement for the responsible and sustainable development of these resources. Petroleum deposits typically extend over large geographic regions, but much of the infrastructure required for oil and gas recovery takes the form of numerous small-scale features (e.g., well sites, access roads, etc.) scattered over the landscape. Increasing exploitation of oil and gas deposits will increase the presence of these disturbances in heavily populated regions. An object-based approach is proposed to utilize RapidEye satellite imagery to delineate well sites and related access roads in diverse complex landscapes, where land surface changes also arise from other human activities, such as forest logging and agriculture. A simplified object-based change vector approach, adaptable to operational use, is introduced to identify the disturbances on land based on red-green spectral response and the spatial attributes of candidate object size and proximity to roads. Testing of the techniques has been undertaken with RapidEye multitemporal imagery at two test sites located in Alberta, Canada: one a predominantly natural forest landscape and the other a landscape dominated by intensive agricultural activities. Accuracies of 84% and 73%, respectively, have been achieved for the identification of well site and access road infrastructure at the two sites based on fully automated processing. Limited manual relabeling of selected image segments can improve these accuracies to 95%.
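As an illustration of a simplified object-based change-vector test (not the authors' processing chain), with invented thresholds and segment attributes: segments whose red-green change magnitude exceeds a threshold and whose area is small are flagged as candidate well-site disturbances.

```python
# Toy object-based change-vector flagging over image segments (invented schema).
import numpy as np

def flag_disturbance(segments, threshold=0.15, max_area_ha=5.0):
    """Return IDs of segments with a large red-green change and a small area."""
    flagged = []
    for seg in segments:
        d_red   = seg["red_t2"]   - seg["red_t1"]
        d_green = seg["green_t2"] - seg["green_t1"]
        magnitude = np.hypot(d_red, d_green)      # change-vector magnitude
        if magnitude > threshold and seg["area_ha"] <= max_area_ha:
            flagged.append(seg["id"])
    return flagged

segs = [{"id": 1, "red_t1": 0.08, "red_t2": 0.21,
         "green_t1": 0.10, "green_t2": 0.19, "area_ha": 1.2}]
print(flag_disturbance(segs))   # -> [1]
```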
Flight control system design factors for applying automated testing techniques
NASA Technical Reports Server (NTRS)
Sitz, Joel R.; Vernon, Todd H.
1990-01-01
Automated validation of flight-critical embedded systems is being done at ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operationally applying automated testing techniques has accentuated flight control system features that either help or hinder the application of these techniques. The paper also discusses flight control system features which foster the use of automated testing techniques.
Open Access: From Myth to Paradox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginsparg, Paul
2009-05-06
True open access to scientific publications not only gives readers the possibility to read articles without paying subscription, but also makes the material available for automated ingestion and harvesting by 3rd parties. Once articles and associated data become universally treatable as computable objects, openly available to 3rd party aggregators and value-added services, what new services can we expect, and how will they change the way that researchers interact with their scholarly communications infrastructure? I will discuss straightforward applications of existing ideas and services, including citation analysis, collaborative filtering, external database linkages, interoperability, and other forms of automated markup, and speculate on the sociology of the next generation of users.
The automated ground network system
NASA Technical Reports Server (NTRS)
Smith, Miles T.; Militch, Peter N.
1993-01-01
The primary goal of the Automated Ground Network System (AGNS) project is to reduce Ground Network (GN) station life-cycle costs. To accomplish this goal, the AGNS project will employ an object-oriented approach to develop a new infrastructure that will permit continuous application of new technologies and methodologies to the Ground Network's class of problems. The AGNS project is a Total Quality (TQ) project. Through use of an open collaborative development environment, developers and users will have equal input into the end-to-end design and development process. This will permit direct user input and feedback and will enable rapid prototyping for requirements clarification. This paper describes the AGNS objectives, operations concept, and proposed design.
Facilities | Hydrogen and Fuel Cells | NREL
Hydrogen Infrastructure Testing and Research Facility: The Hydrogen Infrastructure Testing and Research Facility (HITRF) at the ESIF combines electrolyzers, a
Parallel digital forensics infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, Lorie M.; Duggan, David Patrick
2009-10-01
This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.
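As a hedged illustration of the embarrassingly parallel workloads such an infrastructure targets (not the Sandia architecture itself), the sketch below hashes an evidence tree with a process pool; the /evidence path is a placeholder.

```python
# Toy parallel evidence hashing with a process pool; /evidence is a placeholder path.
import hashlib
from multiprocessing import Pool
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 and return (path, digest)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return str(path), h.hexdigest()

if __name__ == "__main__":
    files = [p for p in Path("/evidence").rglob("*") if p.is_file()]
    with Pool() as pool:
        for path, digest in pool.imap_unordered(sha256_of, files, chunksize=16):
            print(digest, path)
```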
21 CFR 864.9175 - Automated blood grouping and antibody test system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated blood grouping and antibody test system...
21 CFR 864.9175 - Automated blood grouping and antibody test system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated blood grouping and antibody test system...
21 CFR 864.9175 - Automated blood grouping and antibody test system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated blood grouping and antibody test system...
21 CFR 864.9175 - Automated blood grouping and antibody test system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated blood grouping and antibody test system...
21 CFR 864.9175 - Automated blood grouping and antibody test system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated blood grouping and antibody test system...
Hanrahan, Lawrence P.; Anderson, Henry A.; Busby, Brian; Bekkedal, Marni; Sieger, Thomas; Stephenson, Laura; Knobeloch, Lynda; Werner, Mark; Imm, Pamela; Olson, Joseph
2004-01-01
In this article we describe the development of an information system for environmental childhood cancer surveillance. The Wisconsin Cancer Registry annually receives more than 25,000 incident case reports. Approximately 269 cases per year involve children. Over time, there has been considerable community interest in understanding the role the environment plays as a cause of these cancer cases. Wisconsin’s Public Health Information Network (WI-PHIN) is a robust web portal integrating both Health Alert Network and National Electronic Disease Surveillance System components. WI-PHIN is the information technology platform for all public health surveillance programs. Functions include the secure, automated exchange of cancer case data between public health–based and hospital-based cancer registrars; web-based supplemental data entry for environmental exposure confirmation and hypothesis testing; automated data analysis, visualization, and exposure–outcome record linkage; directories of public health and clinical personnel for role-based access control of sensitive surveillance information; public health information dissemination and alerting; and information technology security and critical infrastructure protection. For hypothesis generation, cancer case data are sent electronically to WI-PHIN and populate the integrated data repository. Environmental data are linked and the exposure–disease relationships are explored using statistical tools for ecologic exposure risk assessment. For hypothesis testing, case–control interviews collect exposure histories, including parental employment and residential histories. This information technology approach can thus serve as the basis for building a comprehensive system to assess environmental cancer etiology. PMID:15471739
NASA Astrophysics Data System (ADS)
Azarbayejani, M.; Jalalpour, M.; El-Osery, A. I.; Reda Taha, M. M.
2011-08-01
In this paper, an innovative field application of a structural health monitoring (SHM) system using field programmable gate array (FPGA) technology and wireless communication is presented. The new SHM system was installed to monitor a reinforced concrete (RC) bridge on Interstate 40 (I-40) in Tucumcari, New Mexico. This newly installed system allows continuous remote monitoring of this bridge using solar power. Details of the SHM component design and installation are discussed. The integration of FPGA and solar power technologies make it possible to remotely monitor infrastructure with limited access to power. Furthermore, the use of FPGA technology enables smart monitoring where data communication takes place on-need (when damage warning signs are met) and on-demand for periodic monitoring of the bridge. Such a system enables a significant cut in communication cost and power demands which are two challenges during SHM operation. Finally, a three-dimensional finite element (FE) model of the bridge was developed and calibrated using a static loading field test. This model is then used for simulating damage occurrence on the bridge. Using the proposed automation process for SHM will reduce human intervention significantly and can save millions of dollars currently spent on prescheduled inspection of critical infrastructure worldwide.
Laying the foundation for a digital Nova Scotia
NASA Astrophysics Data System (ADS)
Bond, J.
2016-04-01
In 2013, the Province of Nova Scotia began an effort to modernize its coordinate referencing infrastructure known as the Nova Scotia Coordinate Referencing System (NSCRS). At that time, 8 active GPS stations were installed in southwest Nova Scotia to evaluate the technology's ability to address the Province's coordinate referencing needs. The success of the test phase helped build a business case to implement the technology across the entire Province. It is anticipated that by the end of 2015, 40 active GPS stations will be in place across Nova Scotia. This infrastructure, known as the Nova Scotia Active Control Stations (NSACS) network, will allow for instantaneous, centimetre-level positioning across the Province. Originally designed to address the needs of the surveying community, the technology has also proven to have applications in mapping, machine automation, agriculture, navigation, emergency response, earthquake detection and other areas. In the foreseeable future, all spatial data sets captured in Nova Scotia will be either directly or indirectly derived from the NSACS network. The technology will promote high accuracy and homogeneous spatial data sets across the Province. The technology behind the NSACS and the development of the system are described. Examples of how the technology is contributing to a digital Nova Scotia are presented. Future applications of the technology are also considered.
Gee, Adrian P.; Richman, Sara; Durett, April; McKenna, David; Traverse, Jay; Henry, Timothy; Fisk, Diann; Pepine, Carl; Bloom, Jeannette; Willerson, James; Prater, Karen; Zhao, David; Koç, Jane Reese; Ellis, Steven; Taylor, Doris; Cogle, Christopher; Moyé, Lemuel; Simari, Robert; Skarlatos, Sonia
2013-01-01
Background and Aims: Multi-center cellular therapy clinical trials require the establishment and implementation of standardized cell processing protocols and associated quality control mechanisms. The aims here were to develop such an infrastructure in support of the Cardiovascular Cell Therapy Research Network (CCTRN) and to report on the results of processing for the first 60 patients. Methods: Standardized cell preparations, consisting of autologous bone marrow mononuclear cells prepared using the Sepax device, were manufactured at each of the five processing facilities that supported the clinical treatment centers. Processing staff underwent centralized training that included proficiency evaluation. Quality was subsequently monitored by a central quality control program that included product evaluation by the CCTRN biorepositories. Results: Data from the first 60 procedures demonstrate that uniform products that met all release criteria could be manufactured at all five sites within 7 hours of receipt of the bone marrow. Uniformity was facilitated by use of the automated systems (the Sepax for processing and the Endosafe device for endotoxin testing), standardized procedures, and centralized quality control. Conclusions: Complex multicenter cell therapy and regenerative medicine protocols can, where necessary, successfully utilize local processing facilities once an effective infrastructure is in place to provide training and quality control. PMID:20524773
Achieving and Sustaining Automated Health Data Linkages for Learning Systems: Barriers and Solutions
Van Eaton, Erik G.; Devlin, Allison B.; Devine, Emily Beth; Flum, David R.; Tarczy-Hornoch, Peter
2014-01-01
Introduction: Delivering more appropriate, safer, and highly effective health care is the goal of a learning health care system. The Agency for Healthcare Research and Quality (AHRQ) funded enhanced registry projects: (1) to create and analyze valid data for comparative effectiveness research (CER); and (2) to enhance the ability to monitor and advance clinical quality improvement (QI). This case report describes barriers and solutions from one state-wide enhanced registry project. Methods: The Comparative Effectiveness Research and Translation Network (CERTAIN) deployed the commercially available Amalga Unified Intelligence System™ (Amalga) as a central data repository to enhance an existing QI registry (the Automation Project). An eight-step implementation process included hospital recruitment, technical electronic health record (EHR) review, hospital-specific interface planning, data ingestion, and validation. Data ownership and security protocols were established, along with formal methods to separate data management for QI purposes and research purposes. Sustainability would come from lowered chart review costs and the hospital’s desire to invest in the infrastructure after trying it. Findings: CERTAIN approached 19 hospitals in Washington State operating within 12 unaffiliated health care systems for the Automation Project. Five of the 19 completed all implementation steps. Four hospitals did not participate due to lack of perceived institutional value. Ten hospitals did not participate because their information technology (IT) departments were oversubscribed (e.g., too busy with Meaningful Use upgrades). One organization representing 22 additional hospitals expressed interest, but was unable to overcome data governance barriers in time. Questions about data use for QI versus research were resolved in a widely adopted project framework. Hospitals restricted data delivery to a subset of patients, introducing substantial technical challenges. Overcoming challenges of idiosyncratic EHR implementations required each hospital to devote more IT resources than were predicted. Cost savings did not meet projections because of the increased IT resource requirements and a different source of lowered chart review costs. Discussion: CERTAIN succeeded in recruiting unaffiliated hospitals into the Automation Project to create an enhanced registry to achieve AHRQ goals. This case report describes several distinct barriers to central data aggregation for QI and CER across unaffiliated hospitals: (1) competition for limited on-site IT expertise, (2) concerns about data use for QI versus research, (3) restrictions on data automation to a defined subset of patients, and (4) unpredictable resource needs because of idiosyncrasies among unaffiliated hospitals in how EHR data are coded, stored, and made available for transmission—even between hospitals using the same vendor’s EHR. Therefore, even a fully optimized automation infrastructure would still not achieve complete automation. The Automation Project was unable to align sufficiently with internal hospital objectives, so it could not show a compelling case for sustainability. PMID:25848606
Model Based Verification of Cyber Range Event Environments
2015-11-13
Commercial and Open Source Systems," in SOSP, Cascais, Portugal, 2011. [3] Sanjai Narain, Sharad Malik, and Ehab Al-Shaer, "Towards Eliminating...Configuration Errors in Cyber Infrastructure," in 4th IEEE Symposium on Configuration Analytics and Automation, Arlington, VA, 2011. [4] Sanjai Narain...Verlag, 2010. [5] Sanjai Narain, "Network Configuration Management via Model Finding," in 19th Large Installation System Administration Conference, San
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center propose a joint project. The goals are to enable scientific workflows of stakeholders to run on multiple cloud resources by use of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) High-Throughput Fabric Virtualization. This is a matching fund project in which Fermilab and KISTI will contribute equal resources.
Ibrahim, Sarah A; Martini, Luigi
2014-08-01
Dissolution method transfer is a complicated yet common process in the pharmaceutical industry. With increased pharmaceutical product manufacturing and dissolution acceptance requirements, dissolution testing has become one of the most labor-intensive quality control testing methods. There is an increased trend for automation in dissolution testing, particularly for large pharmaceutical companies to reduce variability and increase personnel efficiency. There is no official guideline for dissolution testing method transfer from a manual, semi-automated, to automated dissolution tester. In this study, a manual multipoint dissolution testing procedure for an enteric-coated aspirin tablet was transferred effectively and reproducibly to a fully automated dissolution testing device, RoboDis II. Enteric-coated aspirin samples were used as a model formulation to assess the feasibility and accuracy of media pH change during continuous automated dissolution testing. Several RoboDis II parameters were evaluated to ensure the integrity and equivalency of dissolution method transfer from a manual dissolution tester. This current study provides a systematic outline for the transfer of the manual dissolution testing protocol to an automated dissolution tester. This study further supports that automated dissolution testers compliant with regulatory requirements and similar to manual dissolution testers facilitate method transfer. © 2014 Society for Laboratory Automation and Screening.
Using container orchestration to improve service management at the RAL Tier-1
NASA Astrophysics Data System (ADS)
Lahiff, Andrew; Collier, Ian
2017-10-01
In recent years container orchestration has been emerging as a means of gaining many potential benefits compared to a traditional static infrastructure, such as increased utilisation through multi-tenancy, improved availability due to self-healing, and the ability to handle changing loads due to elasticity and auto-scaling. To this end we have been investigating migrating services at the RAL Tier-1 to an Apache Mesos cluster. In this model the concept of individual machines is abstracted away and services are run in containers on a cluster of machines, managed by schedulers, enabling a high degree of automation. Here we describe Mesos, the infrastructure deployed at RAL, and describe in detail the explicit example of running a batch farm on Mesos.
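A hedged sketch of submitting a containerized service to a Marathon scheduler running on such a Mesos cluster via its REST API (the /v2/apps endpoint and app-definition fields follow Marathon's documented interface); the URL, image name, and resource figures are invented for illustration, and this is not the RAL deployment.

```python
# Toy submission of a containerized service to a Marathon scheduler.
import requests

app = {
    "id": "/tier1/squid-proxy",                    # hypothetical service name
    "cpus": 2, "mem": 4096, "instances": 3,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "registry.example.org/squid:latest", "network": "HOST"},
    },
}

resp = requests.post("http://marathon.example.org:8080/v2/apps", json=app)
resp.raise_for_status()
print("deployment accepted:", resp.json().get("deployments"))
```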
The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog)
MacArthur, Jacqueline; Bowler, Emily; Cerezo, Maria; Gil, Laurent; Hall, Peggy; Hastings, Emma; Junkins, Heather; McMahon, Aoife; Milano, Annalisa; Morales, Joannella; Pendlington, Zoe May; Welter, Danielle; Burdett, Tony; Hindorff, Lucia; Flicek, Paul; Cunningham, Fiona; Parkinson, Helen
2017-01-01
The NHGRI-EBI GWAS Catalog has provided data from published genome-wide association studies since 2008. In 2015, the database was redesigned and relocated to EMBL-EBI. The new infrastructure includes a new graphical user interface (www.ebi.ac.uk/gwas/), ontology supported search functionality and an improved curation interface. These developments have improved the data release frequency by increasing automation of curation and providing scaling improvements. The range of available Catalog data has also been extended with structured ancestry and recruitment information added for all studies. The infrastructure improvements also support scaling for larger arrays, exome and sequencing studies, allowing the Catalog to adapt to the needs of evolving study design, genotyping technologies and user needs in the future. PMID:27899670
IT Requirements Integration in High-Rise Construction Design Projects
NASA Astrophysics Data System (ADS)
Levina, Anastasia; Ilin, Igor; Esedulaev, Rustam
2018-03-01
The paper discusses the growing role of IT support for the operation of modern high-rise buildings, due to the complexity of managing engineering systems of buildings and the requirements of consumers for the IT infrastructure. The existing regulatory framework for the development of design documentation for construction, including high-rise buildings, is analyzed, and the lack of coherence in the development of this documentation with the requirements for the creation of an automated management system and the corresponding IT infrastructure is stated. The lack of integration between these areas is the cause of delays and inefficiencies both at the design stage and at the stage of putting the building into operation. The paper proposes an approach to coordinate the requirements of the IT infrastructure of high-rise buildings and design documentation for construction. The solution to this problem is possible within the framework of the enterprise architecture concept by coordinating the requirements of the IT and technological layers at the design stage of the construction.
Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds
NASA Astrophysics Data System (ADS)
Li, Rui; Chen, Lei; Li, Wen-Syan
Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Currently, Hadoop (and many other frameworks) requires users to configure cloud infrastructures via programs and APIs, and such configuration is fixed at runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job as well as a workload consisting of multiple jobs running concurrently, aiming at maximum throughput using a minimum set of processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melaina, Marc W; Wood, Eric W
The plug-in electric vehicle (PEV) market is experiencing rapid growth with dozens of battery electric (BEV) and plug-in hybrid electric (PHEV) models already available and billions of dollars being invested by automotive manufacturers in the PEV space. Electric range is increasing thanks to larger and more advanced batteries and significant infrastructure investments are being made to enable higher power fast charging. Costs are falling and PEVs are becoming more competitive with conventional vehicles. Moreover, new technologies such as connectivity and automation hold the promise of enhancing the value proposition of PEVs. This presentation outlines a suite of projects funded by the U.S. Department of Energy's Vehicle Technology Office to conduct assessments of the economic value and charging infrastructure requirements of the evolving PEV market. Individual assessments include national evaluations of PEV economic value (assuming 73M PEVs on the road in 2035), national analysis of charging infrastructure requirements (with community and corridor level resolution), and case studies of PEV ownership in Columbus, OH and Massachusetts.
Intelligent behaviors through vehicle-to-vehicle and vehicle-to-infrastructure communication
NASA Astrophysics Data System (ADS)
Garcia, Richard D.; Sturgeon, Purser; Brown, Mike
2012-06-01
The last decade has seen a significant increase in intelligent safety devices on private automobiles. These devices have both increased and augmented the situational awareness of the driver and in some cases provided automated vehicle responses. To date almost all intelligent safety devices have relied on data directly perceived by the vehicle. This constraint has a direct impact on the types of solutions available to the vehicle. In an effort to improve the safety options available to a vehicle, numerous research laboratories and government agencies are investing time and resources into connecting vehicles to each other and to infrastructure-based devices. This work details several efforts in both the commercial vehicle and the private auto industries to increase vehicle safety and driver situational awareness through vehicle-to-vehicle and vehicle-to-infrastructure communication. It will specifically discuss intelligent behaviors being designed to automatically disable non-compliant vehicles, warn tractor trailer vehicles of unsafe lane maneuvers such as lane changes, passing, and merging, and alert drivers to non-line-of-sight emergencies.
Automation of diagnostic genetic testing: mutation detection by cyclic minisequencing.
Alagrund, Katariina; Orpana, Arto K
2014-01-01
The rising role of nucleic acid testing in clinical decision making is creating a need for efficient and automated diagnostic nucleic acid test platforms. Clinical use of nucleic acid testing sets demands for shorter turnaround times (TATs), lower production costs, and robust, reliable methods that can easily adopt new test panels and run rare tests on a random-access principle. Here we present a novel home-brew laboratory automation platform for diagnostic mutation testing. This platform is based on cyclic minisequencing (cMS) and two-color near-infrared (NIR) detection. Pipetting is automated using Tecan Freedom EVO pipetting robots, and all assays are performed in 384-well microplate format. The automation platform includes a data processing system, controlling all procedures, and automated patient result reporting to the hospital information system. We have found automated cMS to be a reliable, inexpensive and robust method for nucleic acid testing across a wide variety of diagnostic tests. The platform is currently in clinical use for over 80 mutations or polymorphisms. In addition to tests performed on blood samples, the system also performs an epigenetic test for methylation of the MGMT gene promoter and companion diagnostic tests for analysis of KRAS and BRAF gene mutations from formalin-fixed and paraffin-embedded tumor samples. Automation of genetic test reporting has been found reliable and efficient, decreasing the workload of academic personnel.
NASA Automated Rendezvous and Capture Review. Executive summary
NASA Technical Reports Server (NTRS)
1991-01-01
In support of the Cargo Transfer Vehicle (CTV) Definition Studies in FY-92, the Advanced Program Development division of the Office of Space Flight at NASA Headquarters conducted an evaluation and review of the United States capabilities and state-of-the-art in Automated Rendezvous and Capture (AR&C). This review was held in Williamsburg, Virginia on 19-21 Nov. 1991 and included over 120 attendees from U.S. government organizations, industries, and universities. One hundred abstracts were submitted to the organizing committee for consideration. Forty-two were selected for presentation. The review was structured to include five technical sessions. Forty-two papers addressed topics in the five categories below: (1) hardware systems and components; (2) software systems; (3) integrated systems; (4) operations; and (5) supporting infrastructure.
2015-05-01
Director, Operational Test and Evaluation. Department of Defense (DOD) Automated Biometric Identification System (ABIS) Version 1.2 Initial Operational Test and Evaluation Report, May 2015.
Launch Control System Software Development System Automation Testing
NASA Technical Reports Server (NTRS)
Hwang, Andrew
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This system requires high-quality testing that will measure and test the capabilities of the system. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group including interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI, which uses streamed simulated data from the testing servers to produce data, plots, statuses, etc. in the GUI. The software used to develop the automated tests included an automated testing framework and an automation library. The automation library contains functionality to automate anything that appears on a desired screen, using image recognition software to detect and control GUI components. The automated testing framework has a tabular-style syntax, which means a line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The data section contains any data values created strictly for the current testing file. The body section holds the tests that are being run. The function section can include any number of functions that may be used by the current testing file or any other file that resources it. The resources and body sections are required for all test files; the data and function sections can be left empty if the data values and functions being used come from a resourced library or another file. To help equip the automation team with better tools, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task of installing and training an optical character recognition (OCR) tool to Brandon Echols, a fellow intern, and me. The purpose of the OCR tool is to analyze an image and find the coordinates of any group of text. Some issues that arose while installing the OCR tool included the absence of certain libraries needed to train the tool and an outdated software version. We eventually resolved the issues and successfully installed the OCR tool. Training the tool required many images in different fonts and sizes, but in the end the tool learned to accurately decipher the text in the images and its coordinates. The OCR tool produced a file that contained significant metadata for each section of text, but only the text and its coordinates were required for our purpose. The team wrote a script to parse the information we wanted from the OCR file into a different file used by automation functions within the automated framework. Since a majority of development and testing of the automated test cases for the GUI in question has been done using live simulated data on the workstations at the Launch Control Center (LCC), a large amount of progress has been made. As of this writing, about 60% of all automated testing has been implemented. Additionally, the OCR tool will help make our automated tests more robust because its text recognition scales well to different text fonts and sizes. Soon the whole test system will be automated, freeing more full-time engineers to work on development projects.
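The parsing step described above, reducing a metadata-rich OCR dump to just the text and its coordinates for the automation functions, can be sketched as follows. The file layout, column names, and paths are assumptions for illustration; the actual OCR output format used at KSC is not specified in the abstract.

```python
import csv

def extract_text_coordinates(ocr_path, out_path):
    """Keep only the text strings and their coordinates from an OCR dump."""
    with open(ocr_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src, delimiter="\t")   # assumed tab-separated OCR output
        writer = csv.writer(dst, delimiter="\t")
        writer.writerow(["text", "x", "y"])
        for row in reader:
            # Discard confidence scores, font metadata, etc.; keep text and position.
            writer.writerow([row["text"], row["x"], row["y"]])

# extract_text_coordinates("ocr_output.tsv", "gui_text_map.tsv")  # hypothetical paths
```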
Analysis of CERN computing infrastructure and monitoring data
NASA Astrophysics Data System (ADS)
Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.
2015-12-01
Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing data sources from different services and on different abstraction levels together and of implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single service boundaries and for the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for map reduce and external access, and describes the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between the CPU/wall fraction, the latency/throughput constraints of network and disk, and the effective job throughput. In this contribution we first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
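As a small illustration of the kind of combined analysis mentioned above, the sketch below computes a CPU/wall fraction and an effective throughput per job and aggregates them by site. The field names and numbers are placeholders, not the actual CERN monitoring schema.

```python
import pandas as pd

# Placeholder job-monitoring records (not real CERN data).
jobs = pd.DataFrame({
    "site":      ["A", "A", "B", "B"],
    "cpu_time":  [3600, 1800, 3000, 2400],   # seconds
    "wall_time": [4000, 3600, 3200, 4800],   # seconds
    "events":    [1.2e6, 0.5e6, 1.0e6, 0.6e6],
})
jobs["cpu_wall_fraction"] = jobs["cpu_time"] / jobs["wall_time"]
jobs["throughput"] = jobs["events"] / jobs["wall_time"]   # events per second

# Relate CPU/wall fraction to effective job throughput, aggregated per site.
print(jobs.groupby("site")[["cpu_wall_fraction", "throughput"]].mean())
```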
What Is an Automated External Defibrillator?
ANSWERS by heart: Treatments + Tests. An automated external defibrillator (AED) is a lightweight, portable device ... detect a rhythm that should be ...
From an automated flight-test management system to a flight-test engineer's workstation
NASA Technical Reports Server (NTRS)
Duke, E. L.; Brumbaugh, R. W.; Hewett, M. D.; Tartt, D. M.
1992-01-01
Described here are the capabilities and evolution of a flight-test engineer's workstation (called TEST PLAN) from an automated flight-test management system. The concept and capabilities of the automated flight-test management system are explored and discussed to illustrate the value of advanced system prototyping and evolutionary software development.
Long Island Smart Energy Corridor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mui, Ming
The Long Island Power Authority (LIPA) has teamed with Stony Brook University (Stony Brook or SBU) and Farmingdale State College (Farmingdale or FSC), two branches of the State University of New York (SUNY), to create a “Smart Energy Corridor.” The project, located along the Route 110 business corridor on Long Island, New York, demonstrated the integration of a suite of Smart Grid technologies from substations to end-use loads. The Smart Energy Corridor Project included the following key features: -TECHNOLOGY: Demonstrated a full range of smart energy technologies, including substations and distribution feeder automation, fiber and radio communications backbone, advanced metering infrastructure (AMI), meter data management (MDM) system (which LIPA implemented outside of this project), field tools automation, customer-level energy management including automated energy management systems, and integration with distributed generation and plug-in hybrid electric vehicles. -MARKETING: A rigorous market test that identified customer response to an alternative time-of-use pricing plan and varying levels of information and analytical support. -CYBER SECURITY: Tested cyber security vulnerabilities in Smart Grid hardware, network, and application layers. Developed recommendations for policies, procedures, and technical controls to prevent or foil cyber-attacks and to harden the Smart Grid infrastructure. -RELIABILITY: Leveraged new Smart Grid-enabled data to increase system efficiency and reliability. Developed enhanced load forecasting, phase balancing, and voltage control techniques designed to work hand-in-hand with the Smart Grid technologies. -OUTREACH: Implemented public outreach and educational initiatives that were linked directly to the demonstration of Smart Grid technologies, tools, techniques, and system configurations. This included creation of full-scale operating models demonstrating application of Smart Grid technologies in business and residential settings. Farmingdale State College held three international conferences on energy and sustainability and Smart Grid related technologies and policies. These conferences, in addition to public seminars, increased understanding and acceptance of Smart Grid transformation by the general public, business, industry, and municipalities in the Long Island and greater New York region. -JOB CREATION: Provided training for the Smart Grid and clean energy jobs of the future at both Farmingdale and Stony Brook. Stony Brook focused its “Cradle to Fortune 500” suite of economic development resources on the opportunities emerging from the project, helping to create new technologies, new businesses, and new jobs. To achieve these features, LIPA and its sub-recipients, FSC and SBU, each have separate but complementary objectives. At LIPA, the Smart Energy Corridor (1) meant validating Smart Grid technologies; (2) quantifying Smart Grid costs and benefits; and (3) providing insights into how Smart Grid applications can be better implemented, readily adapted, and replicated in individual homes and businesses. LIPA installed 2,550 AMI meters (exceeding the 500 AMI meters in the original plan), created three “smart” substations serving the Corridor, and installed additional distribution automation elements including two-way communications and digital controls over various feeders and capacitor banks. It gathered and analyzed customer behavior information on how customers responded to a new “smart” TOU rate and to various levels of information and analytical tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez
2017-11-18
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but their scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations.
Khalifa, Tarek; Abdrabou, Atef; Shaban, Khaled; Gaouda, A M
2018-05-11
Supporting a conventional power grid with advanced communication capabilities is a cornerstone of transforming it into a smart grid. A reliable communication infrastructure with a high throughput can lay the foundation towards the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations, including medium/low voltage ones. This would enable information exchange among substations for a variety of system automation purposes with a low latency that suits time-critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act like a single data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids.
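The "single data pipe" idea, sending distinct packets over two radio interfaces instead of duplicating them, can be sketched with a weighted scheduler. This is only an illustration under assumed link rates, not the scheduling scheme evaluated in the paper.

```python
def split_packets(packets, rate_wifi, rate_cellular):
    """Assign each packet to exactly one interface, weighted by nominal link rates."""
    queues = {"wifi": [], "cellular": []}
    credit = {"wifi": 0.0, "cellular": 0.0}
    for pkt in packets:
        # Credit-based weighted round robin: the faster link earns credit faster.
        credit["wifi"] += rate_wifi
        credit["cellular"] += rate_cellular
        target = max(credit, key=credit.get)
        queues[target].append(pkt)
        credit[target] -= rate_wifi + rate_cellular
    return queues

# Ten distinct packets split roughly 73%/27% between assumed 54 and 20 Mbit/s links.
print(split_packets(list(range(10)), rate_wifi=54.0, rate_cellular=20.0))
```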
Solti, Imre; Aaronson, Barry; Fletcher, Grant; Solti, Magdolna; Gennari, John H; Cooper, Melissa; Payne, Tom
2008-11-06
Detailed problem lists that comply with JCAHO requirements are important components of electronic health records. Besides improving continuity of care, electronic problem lists could serve as foundation infrastructure for clinical trial recruitment, research, biosurveillance, and billing informatics modules. However, physicians rarely maintain problem lists. Our team is building a system using MetaMap and UMLS to automatically populate the problem list. We report our early results evaluating the application. Three physicians generated gold standard problem lists for 100 cardiology ambulatory progress notes. Our application had 88% sensitivity and 66% precision using a non-modified UMLS dataset. The system's misses were concentrated in the group of ambiguous problem list entries (Chi-square = 27.12, p < 0.0001). In addition to the explicit entries, the notes included 10% implicit entry candidates. MetaMap and UMLS are readily applicable to automate the problem list. Ambiguity in medical documents has consequences for performance evaluation of automated systems.
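For readers less familiar with the reported metrics, the sketch below shows how sensitivity and precision are computed from true positives, false positives, and false negatives. The counts are invented purely to reproduce figures close to the 88% and 66% reported above; they are not the study's data.

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)   # fraction of gold-standard entries the system found

def precision(tp, fp):
    return tp / (tp + fp)   # fraction of system-proposed entries that were correct

tp, fp, fn = 88, 45, 12     # hypothetical counts of problem-list entries
print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 0.88
print(f"precision   = {precision(tp, fp):.2f}")     # 0.66
```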
Flight Software for the LADEE Mission
NASA Technical Reports Server (NTRS)
Cannon, Howard N.
2015-01-01
The Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft was launched on September 6, 2013, and completed its mission on April 17, 2014 with a directed impact to the Lunar Surface. Its primary goals were to examine the lunar atmosphere, measure lunar dust, and to demonstrate high rate laser communications. The LADEE mission was a resounding success, achieving all mission objectives, much of which can be attributed to careful planning and preparation. This paper discusses some of the highlights from the mission, and then discusses the techniques used for developing the onboard Flight Software. A large emphasis for the Flight Software was to develop it within tight schedule and cost constraints. To accomplish this, the Flight Software team leveraged heritage software, used model based development techniques, and utilized an automated test infrastructure. This resulted in the software being delivered on time and within budget. The resulting software was able to meet all system requirements, and had very few problems in flight.
Personal Computer-less (PC-less) Microcontroller Training Kit
NASA Astrophysics Data System (ADS)
Somantri, Y.; Wahyudin, D.; Fushilat, I.
2018-02-01
A microcontroller training kit is necessary for the practical work of students in electrical engineering education. However, available training kits are not only costly but also do not meet laboratory requirements. An affordable and portable microcontroller kit could address this problem. This paper explains the design and development of a Personal Computer-less (PC-less) Microcontroller Training Kit. It was developed based on a Lattepanda processor with an Arduino microcontroller as the target. The training kit is equipped with advanced input-output interfaces that adopt the concept of a low-cost and low-power system. Preliminary usability testing proved this device can be used as a tool for microcontroller programming and industrial automation training. By adopting the concept of portability, the device can be operated in rural areas where electricity and computer infrastructure are limited. Furthermore, the training kit is suitable for electrical engineering students from universities and vocational high schools.
Pan, Jeng-Jong; Nahm, Meredith; Wakim, Paul; Cushing, Carol; Poole, Lori; Tai, Betty; Pieper, Carl F
2009-02-01
Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure including knowledge management, portfolio management, information management, process automation, work policies, and procedures in clinical research networks facilitates consistency and ultimately research. In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss our challenges to inform others considering such an endeavor. During the migration of a clinical trial network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization. We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial. A single clinical trial network comprising addiction researchers and community treatment programs was assessed. The findings may not be applicable to other research settings. The identified informatics components provide the information and infrastructure needed for our clinical trial network. Post centralization data management operations are more efficient and less costly, with higher data quality.
Vogel, Adam P; Block, Susan; Kefalianos, Elaina; Onslow, Mark; Eadie, Patricia; Barth, Ben; Conway, Laura; Mundt, James C; Reilly, Sheena
2015-04-01
To investigate the feasibility of adopting automated interactive voice response (IVR) technology for remotely capturing standardized speech samples from stuttering children. Participants were 10 6-year-old stuttering children. Their parents called a toll-free number from their homes and were prompted to elicit speech from their children using a standard protocol involving conversation, picture description and games. The automated IVR system was implemented using an off-the-shelf telephony software program and delivered by a standard desktop computer. The software infrastructure utilizes voice over internet protocol. Speech samples were automatically recorded during the calls. Video recordings were simultaneously acquired in the home at the time of the call to evaluate the fidelity of the telephone collected samples. Key outcome measures included syllables spoken, percentage of syllables stuttered and an overall rating of stuttering severity using a 10-point scale. Data revealed a high level of relative reliability in terms of intra-class correlation between the video and telephone acquired samples on all outcome measures during the conversation task. Findings were less consistent for speech samples during picture description and games. Results suggest that IVR technology can be used successfully to automate remote capture of child speech samples.
Jean Louis, Frantz; Buteau, Josiane; Boncy, Jacques; Anselme, Renette; Stanislas, Magalie; Nagel, Mary C; Juin, Stanley; Charles, Macarthur; Burris, Robert; Antoine, Eva; Yang, Chunfu; Kalou, Mireille; Vertefeuille, John; Marston, Barbara J; Lowrance, David W; Deyde, Varough
2017-10-01
Before the 2010 devastating earthquake and cholera outbreak, Haiti's public health laboratory systems were weak and services were limited. There was no national laboratory strategic plan and only minimal coordination across the laboratory network. Laboratory capacity was further weakened by the destruction of over 25 laboratories and testing sites at the departmental and peripheral levels and the loss of life among the laboratory health-care workers. However, since 2010, tremendous progress has been made in building stronger laboratory infrastructure and training a qualified public health laboratory workforce across the country, allowing for decentralization of access to quality-assured services. Major achievements include development and implementation of a national laboratory strategic plan with a formalized and strengthened laboratory network; introduction of automation of testing to ensure better quality of results and diversify the menu of tests to effectively respond to outbreaks; expansion of molecular testing for tuberculosis, human immunodeficiency virus, malaria, diarrheal and respiratory diseases; establishment of laboratory-based surveillance of epidemic-prone diseases; and improvement of the overall quality of testing. Nonetheless, the progress and gains made remain fragile and require the full ownership and continuous investment from the Haitian government to sustain these successes and achievements.
Integrating Test-Form Formatting into Automated Test Assembly
ERIC Educational Resources Information Center
Diao, Qi; van der Linden, Wim J.
2013-01-01
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
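Since the paragraph above describes automated test assembly as a mixed integer programming problem, a toy formulation may help make it concrete. The sketch below, which assumes a tiny invented item bank and uses the PuLP package as an example solver, selects three items that maximize information subject to a simple formatting constraint; it is not the authors' model.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

info   = [0.9, 0.7, 0.8, 0.4, 0.6]   # item information at the target ability level
length = [120, 80, 150, 60, 90]      # words per item (a simple formatting attribute)

prob = LpProblem("test_assembly", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(info))]

prob += lpSum(info[i] * x[i] for i in range(len(info)))            # maximize information
prob += lpSum(x) == 3                                              # desired test length
prob += lpSum(length[i] * x[i] for i in range(len(info))) <= 350   # fits on the form

prob.solve(PULP_CBC_CMD(msg=False))
print("selected items:", [i for i in range(len(info)) if x[i].value() == 1])
```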
Publication of sensor data in the long-term environmental monitoring infrastructure TERENO
NASA Astrophysics Data System (ADS)
Stender, V.; Schroeder, M.; Klump, J. F.
2014-12-01
Terrestrial Environmental Observatories (TERENO) is an interdisciplinary and long-term research project spanning an Earth observation network across Germany. It includes four test sites within Germany, from the North German lowlands to the Bavarian Alps, and is operated by six research centers of the Helmholtz Association. TERENO Northeast is one of the sub-observatories of TERENO and is operated by the German Research Centre for Geosciences GFZ in Potsdam. This observatory investigates geoecological processes in the northeastern lowland of Germany by collecting large amounts of environmentally relevant data. The success of long-term projects like TERENO depends on well-organized data management, on data exchange between the partners involved, and on the availability of the captured data. Data discovery and dissemination are facilitated not only through data portals of the regional TERENO observatories but also through a common spatial data infrastructure, TEODOOR (TEreno Online Data repOsitORry). TEODOOR bundles the data provided by the web services of the individual observatories and provides tools for data discovery, visualization and data access. The TERENO Northeast data infrastructure integrates data from more than 200 instruments and makes data available through standard web services. TEODOOR accesses the OGC Sensor Web Enablement (SWE) interfaces offered by the regional observatories. In addition to the SWE interface, TERENO Northeast also publishes time series of environmental sensor data through the online research data publication platform DataCite. The metadata required by DataCite are created in an automated process by extracting information from the SWE SensorML to create ISO 19115 compliant metadata. The GFZ data management toolkit panMetaDocs is used to register Digital Object Identifiers (DOIs) and preserve file-based datasets. In addition to DOIs, International Geo Sample Numbers (IGSNs) are used to uniquely identify research specimens.
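The automated metadata hand-off described above, pulling fields out of a SensorML document and producing a publisher-ready record, can be sketched roughly as follows. The element names, the namespace-free XML, and the target dictionary layout are simplifications assumed for illustration; they do not reflect the exact TERENO/DataCite mapping.

```python
import xml.etree.ElementTree as ET

def sensorml_to_record(sensorml_xml, doi):
    """Map a (simplified) SensorML description onto a minimal DataCite-style record."""
    root = ET.fromstring(sensorml_xml)
    return {
        "identifier": {"identifier": doi, "identifierType": "DOI"},
        "titles": [{"title": root.findtext(".//longName", default="unknown sensor")}],
        "publisher": "GFZ German Research Centre for Geosciences",  # assumed publisher
        "resourceType": {"resourceTypeGeneral": "Dataset"},
    }

example = "<system><longName>Soil moisture station DE-NE-01</longName></system>"
print(sensorml_to_record(example, doi="10.5880/EXAMPLE.1"))  # hypothetical DOI
```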
An automated testing tool for traffic signal controller functionalities.
DOT National Transportation Integrated Search
2010-03-01
The purpose of this project was to develop an automated tool that facilitates testing of traffic controller functionality using controller interface device (CID) technology. Benefits of such automated testers to traffic engineers include reduced test...
From an automated flight-test management system to a flight-test engineer's workstation
NASA Technical Reports Server (NTRS)
Duke, E. L.; Brumbaugh, Randal W.; Hewett, M. D.; Tartt, D. M.
1991-01-01
The capabilities and evolution of a flight-test engineer's workstation (called TEST-PLAN), derived from an automated flight test management system, are described. The concept and capabilities of the automated flight test management system are explored and discussed to illustrate the value of advanced system prototyping and evolutionary software development.
Automated Test-Form Generation
ERIC Educational Resources Information Center
van der Linden, Wim J.; Diao, Qi
2011-01-01
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Case management information systems: how to put the pieces together now and beyond year 2000.
Matthews, Pamela
2002-01-01
The case management process is a critical management and operational component in the delivery of customer services across the patient care continuum. Case management has transcended time and will continue to be a viable infrastructure process for successful organizations in the future. A key component of the case management infrastructure is information systems and technology support. Case management challenges include effective deployment and use of systems and technology. As more sophisticated, integrated systems are made available, case managers can use these tools to continue to expand effectively beyond the patient's episodic event to provide greater levels of cradle-to-grave management of healthcare. This article explores methods for defining case management system needs and identifying automation options available to the case manager.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostick, Debra A.; Hexel, Cole R.; Ticknor, Brian W.
2016-11-01
To shorten the lengthy and costly manual chemical purification procedures, sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment. This addresses a serious need in the nuclear safeguards community to debottleneck the separation of U and Pu in environmental samples—currently performed by overburdened chemists—with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on current COTS equipment that was modified for U/Pu separations utilizing Eichrom™ TEVA and UTEVA resins. Initial verification of individual columns yielded small elution volumes with consistent elution profiles and good recovery. Combined column calibration demonstrated ample separation without cross-contamination of the eluent. Automated packing and unpacking of the built-in columns initially showed >15% deviation in resin loading by weight, which can lead to inconsistent separations. Optimization of the packing and unpacking methods led to a reduction in the variability of the packed resin to less than 5% daily. The reproducibility of the automated system was tested with samples containing 30 ng U and 15 pg Pu, which were separated in a series with alternating reagent blanks. These experiments showed very good washout of both the resin and the sample from the columns, as evidenced by low blank values. Analysis of the major and minor isotope ratios for U and Pu provided values well within data quality limits for the International Atomic Energy Agency. Additionally, system process blanks spiked with 233U and 244Pu tracers were separated using the automated system after it was moved outside of a clean room and yielded levels equivalent to clean room blanks, confirming that the system can produce high quality results without the need for expensive clean room infrastructure. Comparison of the amount of personnel time necessary for successful manual vs. automated chemical separations showed a significant decrease in hands-on time, from 9.8 hours to 35 minutes for seven samples, respectively. This documented time savings and reduced labor translates to a significant cost savings per sample. Overall, the system will enable faster sample reporting times at reduced costs by limiting personnel hours dedicated to the chemical separation.
Wesnes, Keith A
2014-01-01
The lack of progress over the last decade in developing treatments for Alzheimer's disease has called into question the quality of the cognitive assessments used while also shifting the emphasis from treatment to prophylaxis by studying the disorder at earlier stages, even prior to the development of cognitive symptoms. This has led various groups to seek cognitive tests which are more sensitive than those currently used and which can be meaningfully administered to individuals with mild or even no cognitive impairment. Although computerized tests have long been used in this field, they have made little inroads compared with non-automated tests. This review attempts to put in perspective the relative utilities of automated and non-automated tests of cognitive function in therapeutic trials of pathological aging and the dementias. Also by a review of the automation of cognitive tests over the last 150 years, it is hoped that the notion that such procedures are novel compared with pencil-and-paper testing will be dispelled. Furthermore, data will be presented to illustrate that older individuals and patients with dementia are neither stressed nor disadvantaged when tested with appropriately developed computerized methods. An important aspect of automated testing is that it can assess all aspects of task performance, including the speed of cognitive processes, and data are presented on the advantages this can confer in clinical trials. The ultimate objectives of the review are to encourage decision making in the field to move away from the automated/non-automated dichotomy and to develop criteria pertinent to each trial against which all available procedures are evaluated. If we are to make serious progress in this area, we must use the best tools available, and the evidence suggests that automated testing has earned the right to be judged against the same criteria as non-automated tests.
Interactive Model-Centric Systems Engineering (IMCSE) Phase 1
2014-09-30
Phase 1 of Interactive Model-Centric Systems Engineering (IMCSE) addressed supporting infrastructure and testing. During Phase 1, the opportunity arose to develop several MPTs to support IMCSE, including supporting infrastructure. Analysis will be completed and tested with a case application, along with preliminary supporting infrastructure, which will then be used to inform ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-13
Modification of the National Customs Automation Program (NCAP) Test Regarding Reconciliation for Filing Certain Post-Importation Claims. The NCAP Reconciliation prototype test is modified to include the filing of post-importation claims. DATES: The test is modified to allow Reconciliation of post-importation preferential tariff ...
21 CFR 864.9300 - Automated Coombs test systems.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...
21 CFR 864.9300 - Automated Coombs test systems.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...
21 CFR 864.9300 - Automated Coombs test systems.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...
21 CFR 864.9300 - Automated Coombs test systems.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...
21 CFR 864.9300 - Automated Coombs test systems.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...
Surface Development and Test Facility (SDTF) New R&D Simulator for Airport Operations
NASA Technical Reports Server (NTRS)
Dorighi, Nancy S.
1997-01-01
A new simulator, the Surface Development and Test Facility (SDTF) is under construction at the NASA Ames Research Center in Mountain View, California. Jointly funded by the FAA (Federal Aviation Administration) and NASA, the SDTF will be a testbed for airport surface automation technologies of the future. The SDTF will be operational in the third quarter of 1998. The SDTF will combine a virtual tower with simulated ground operations to allow evaluation of new technologies for safety, effectiveness, reliability, and cost benefit. The full-scale level V tower will provide a seamless 360 degree high resolution out-the-window view, and a full complement of ATC (air traffic control) controller positions. The imaging system will be generated by two fully-configured Silicon Graphics Onyx Infinite Reality computers, and will support surface movement of up to 200 aircraft and ground vehicles. The controller positions, displays and consoles can be completely reconfigured to match the unique layout of any individual airport tower. Dedicated areas will accommodate pseudo-airport ramp controllers, pseudo-airport operators, and pseudo-pilots. Up to 33 total personnel positions will be able to participate in simultaneous operational scenarios. A realistic voice communication infrastructure will emulate the intercom and telephone communications of a real airport tower. Multi-channel audio and video recording and a sophisticated data acquisition system will support a wide variety of research and development areas, such as evaluation of automation tools for surface operations, human factors studies, integration of terminal area and airport technologies, and studies of potential airport physical and procedural modifications.
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
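The benchmarking goal mentioned above, identifying the VM type with the best balance of performance and cost, reduces to a simple selection once a reference simulation has been timed on each candidate. The instance names, runtimes, and prices below are invented for illustration and are not results from the Maestro study.

```python
candidates = [
    # (instance type, measured runtime of a reference simulation [h], price [$/h])
    ("small",  6.0, 0.10),
    ("medium", 3.5, 0.20),
    ("large",  2.0, 0.45),
]

def best_instance(candidates, max_runtime_h):
    """Cheapest total cost among instance types that meet the runtime bound."""
    feasible = [(name, runtime * price)
                for name, runtime, price in candidates if runtime <= max_runtime_h]
    return min(feasible, key=lambda item: item[1]) if feasible else None

print(best_instance(candidates, max_runtime_h=4.0))  # ('medium', 0.7)
```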
NASA Astrophysics Data System (ADS)
Wang, Qianlu
2017-10-01
Urban infrastructure and urbanization influence each other, and quantitative analysis of the relationship between them plays a significant role in promoting social development. Based on data on infrastructure and the proportion of urban population in Shanghai from 1988 to 2013, the paper uses the econometric methods of co-integration testing, an error correction model, and Granger causality testing to empirically analyze the relationship between Shanghai's infrastructure and urbanization. The results show that: 1) Shanghai's urban infrastructure has a positive effect on the development of urbanization and on narrowing the population gap; 2) when short-term fluctuations deviate from the long-term equilibrium, the system pulls the non-equilibrium state back to equilibrium with an adjustment intensity of 0.342670, and hospital infrastructure is not only an important variable for urban development in the short term but also a leading infrastructure in the process of urbanization in Shanghai; 3) there is Granger causality between road infrastructure and urbanization, there is no Granger causality between water infrastructure and urbanization, and the hospital and school components of social infrastructure have unidirectional Granger causality with urbanization.
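As a rough illustration of the Granger-causality step in the workflow above, the sketch below runs the test on two synthetic series using statsmodels; the data are random placeholders, not the Shanghai 1988-2013 series, and the lag choice is arbitrary.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
road = np.cumsum(rng.normal(1.0, 0.2, 26))                # proxy for road infrastructure
urban = 0.5 * np.roll(road, 1) + rng.normal(0, 0.1, 26)   # urbanization lags road by construction
urban[0] = urban[1]                                       # discard the wrap-around artifact

# Column order: [dependent series, candidate cause]; test lags 1 and 2.
data = np.column_stack([urban, road])
grangercausalitytests(data, maxlag=2)
```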
Software Quality Control at Belle II
NASA Astrophysics Data System (ADS)
Ritter, M.; Kuhr, T.; Hauth, T.; Gebard, T.; Kristof, M.; Pulvermacher, C.;
2017-10-01
Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. Keeping a coherent software stack of such high quality that it can be sustained and used efficiently for data acquisition, simulation, reconstruction, and analysis over the lifetime of the Belle II experiment is a challenge. A set of tools is employed to monitor the quality of the software and provide fast feedback to the developers. They are integrated in a machinery that is controlled by a buildbot master and automates the quality checks. The tools include different compilers, cppcheck, the clang static analyzer, valgrind memcheck, doxygen, a geometry overlap checker, a check for missing or extra library links, unit tests, steering file level tests, a sophisticated high-level validation suite, and an issue tracker. The technological development infrastructure is complemented by organizational means to coordinate the development.
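A heavily simplified sketch of such an automated quality-check runner is shown below: it invokes a few of the named tools and reports pass/fail per check. The commands and project layout are assumptions for illustration; the actual Belle II machinery is driven by a buildbot master and is considerably more elaborate.

```python
import subprocess

CHECKS = {
    "cppcheck":   ["cppcheck", "--error-exitcode=1", "src/"],   # static analysis
    "unit tests": ["ctest", "--output-on-failure"],             # assumes a CMake/CTest build
    "doxygen":    ["doxygen", "Doxyfile"],                      # documentation build
}

def run_checks(checks):
    """Run each quality check and record whether it exited successfully."""
    results = {}
    for name, cmd in checks.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = (proc.returncode == 0)
    return results

if __name__ == "__main__":
    for name, ok in run_checks(CHECKS).items():
        print(f"{name:10s} {'OK' if ok else 'FAILED'}")
```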
Semi-Automated Diagnosis, Repair, and Rework of Spacecraft Electronics
NASA Technical Reports Server (NTRS)
Struk, Peter M.; Oeftering, Richard C.; Easton, John W.; Anderson, Eric E.
2008-01-01
NASA's Constellation Program for Exploration of the Moon and Mars places human crews in extreme isolation in resource scarce environments. Near Earth, the discontinuation of Space Shuttle flights after 2010 will alter the up- and down-mass capacity for the International Space Station (ISS). NASA is considering new options for logistics support strategies for future missions. Aerospace systems are often composed of replaceable modular blocks that minimize the need for complex service operations in the field. Such a strategy however, implies a robust and responsive logistics infrastructure with relatively low transportation costs. The modular Orbital Replacement Units (ORU) used for ISS requires relatively large blocks of replacement hardware even though the actual failed component may really be three orders of magnitude smaller. The ability to perform in-situ repair of electronics circuits at the component level can dramatically reduce the scale of spares and related logistics cost. This ability also reduces mission risk, increases crew independence and improves the overall supportability of the program. The Component-Level Electronics Assembly Repair (CLEAR) task under the NASA Supportability program was established to demonstrate the practicality of repair by first investigating widely used soldering materials and processes (M&P) performed by modest manual means. The work will result in program guidelines for performing manual repairs along with design guidance for circuit reparability. The next phase of CLEAR recognizes that manual repair has its limitations and some highly integrated devices are extremely difficult to handle and demand semi-automated equipment. Further, electronics repairs require a broad range of diagnostic capability to isolate the faulty components. Finally repairs must pass functional tests to determine that the repairs are successful and the circuit can be returned to service. To prevent equipment demands from exceeding spacecraft volume capacity and skill demands from exceeding crew time and training limits, the CLEAR project is examining options provided by non-real time tele-operations, robotics, and a new generation of diagnostic equipment. This paper outlines a strategy to create an effective repair environment where, with the support of ground based engineers, crewmembers can diagnose, repair and test flight electronics in-situ. This paper also discusses the implications of successful tele-robotic repairs when expanded to rework and reconfiguration of used flight assets for building Constellation infrastructure elements.
Automating Deep Space Network scheduling and conflict resolution
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Clement, Bradley
2005-01-01
The Deep Space Network (DSN) is a central part of NASA's infrastructure for communicating with active space missions, from earth orbit to beyond the solar system. We describe our recent work in modeling the complexities of user requirements, and then scheduling and resolving conflicts on that basis. We emphasize our innovative use of background 'intelligent' assistants' that carry out search asynchrnously while the user is focusing on various aspects of the schedule.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
DEPARTMENT OF HOMELAND SECURITY, U.S. Customs and Border Protection. Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image System (DIS) and Simplified Entry (SE); Correction. AGENCY: U.S. Customs and Border Protection, Department of Homeland Security ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
Time and location data play a very significant role in a variety of factory automation scenarios, such as automated vehicles and robots, their navigation, tracking, and monitoring, as well as services for optimization and security. In addition, pervasive wireless capabilities combined with time and location information are enabling new applications in areas such as transportation systems, health care, elder care, military, emergency response, critical infrastructure, and law enforcement. A person or object in proximity to certain areas for specific durations of time may pose a risk hazard to themselves, others, or the environment. This paper presents DSTiPE, a novel fuzzy-based method for calculating the spatio-temporal risk that an object with wireless communications presents to the environment. The presented Matlab-based application for fuzzy spatio-temporal risk cluster extraction is verified on a diagonal vehicle movement example.
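To make the fuzzy spatio-temporal idea concrete, the sketch below combines a proximity membership and a dwell-time membership with a fuzzy AND (minimum). The membership shapes and thresholds are invented for illustration and are not the DSTiPE rule base.

```python
def mu_near(distance_m, near=5.0, far=50.0):
    """1 when closer than `near`, 0 beyond `far`, linear in between."""
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return (far - distance_m) / (far - near)

def mu_long(duration_s, short=10.0, long=300.0):
    """0 for a brief presence, 1 for a long dwell time, linear in between."""
    if duration_s <= short:
        return 0.0
    if duration_s >= long:
        return 1.0
    return (duration_s - short) / (long - short)

def risk(distance_m, duration_s):
    # Fuzzy AND (minimum): risk is high only if the object is both near and lingers.
    return min(mu_near(distance_m), mu_long(duration_s))

print(round(risk(distance_m=8.0, duration_s=120.0), 3))
```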
MPEG-7-based description infrastructure for an audiovisual content analysis and retrieval system
NASA Astrophysics Data System (ADS)
Bailer, Werner; Schallauer, Peter; Hausenblas, Michael; Thallinger, Georg
2005-01-01
We present a case study of establishing a description infrastructure for an audiovisual content analysis and retrieval system. The description infrastructure consists of an internal metadata model and an access tool for using it. Based on an analysis of requirements, we selected, out of a set of candidates, MPEG-7 as the basis of our metadata model. The openness and generality of MPEG-7 allow it to be used in a broad range of applications, but they increase complexity and hinder interoperability. Profiling has been proposed as a solution, with the focus on selecting and constraining description tools. Semantic constraints are currently only described in textual form. Conformance in terms of semantics can thus not be evaluated automatically, and mappings between different profiles can only be defined manually. As a solution, we propose an approach to formalize the semantic constraints of an MPEG-7 profile using a formal vocabulary expressed in OWL, which allows automated processing of semantic constraints. We have defined the Detailed Audiovisual Profile as the profile to be used in our metadata model, and we show how some of the semantic constraints of this profile can be formulated using ontologies. To work practically with the metadata model, we have implemented an MPEG-7 library and a client/server document access infrastructure.
NASA Astrophysics Data System (ADS)
Wang, P.; Huang, C.
2017-12-01
The three-dimensional (3D) structure of buildings and infrastructures is fundamental to understanding and modelling of the impacts and challenges of urbanization in terms of energy use, carbon emissions, and earthquake vulnerabilities. However, spatially detailed maps of urban 3D structure have been scarce, particularly in fast-changing developing countries. We present here a novel methodology to map the volume of buildings and infrastructures at 30 meter resolution using a synergy of Landsat imagery and openly available global digital surface models (DSMs), including the Shuttle Radar Topography Mission (SRTM), ASTER Global Digital Elevation Map (GDEM), ALOS World 3D - 30m (AW3D30), and the recently released global DSM from the TanDEM-X mission. Our method builds on the concept of object-based height profile to extract height metrics from the DSMs and use a machine learning algorithm to predict height and volume from the height metrics. We have tested this algorithm in the entire England and assessed our result using Lidar measurements in 25 England cities. Our initial assessments achieved a RMSE of 1.4 m (R2 = 0.72) for building height and a RMSE of 1208.7 m3 (R2 = 0.69) for building volume, demonstrating the potential of large-scale applications and fully automated mapping of urban structure.
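A stripped-down version of the prediction step described above could look like the following: a regression model maps per-building height metrics extracted from the DSMs to reference heights, and the fit is summarized with RMSE and R². The synthetic features, the lidar stand-in, and the choice of a random forest are assumptions for illustration, not the study's actual feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 40, size=(500, 5))          # placeholder per-building DSM height metrics
y = X.mean(axis=1) + rng.normal(0, 1.5, 500)   # placeholder lidar reference heights (m)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.2f} m, R2 = {r2_score(y_te, pred):.2f}")
```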
Automated Source-Code-Based Testing of Object-Oriented Software
NASA Astrophysics Data System (ADS)
Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten
2014-08-01
With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source-code-based testing for C++.
NASA Astrophysics Data System (ADS)
Hasan, M.; Helal, A.; Gabr, M.
2014-12-01
In this project, we focus on providing a computer-automated platform for a better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructures such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of the rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available through previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates parameter input, divides the data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures based on user feedback quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.
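As a hedged sketch of the described workflow (splitting parameter data into training and testing sets and feeding them to an ANN), the following uses scikit-learn in place of the MATLAB ANN tool mentioned in the abstract. The parameter columns and the synthetic risk relation are placeholders, not the study's data or correlations.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns (illustrative): side slope, rise rate of reservoir level, high-water duration, storm cycles.
rng = np.random.default_rng(1)
X = rng.uniform([1.5, 0.1, 1.0, 1.0], [4.0, 2.0, 30.0, 20.0], size=(300, 4))
# Synthetic placeholder relation between parameters and a risk index.
risk = 1.0 / X[:, 0] + 0.1 * X[:, 1] * X[:, 2] / 30.0 + 0.02 * X[:, 3] + rng.normal(0, 0.05, 300)

X_train, X_test, y_train, y_test = train_test_split(X, risk, test_size=0.25, random_state=1)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=1))
ann.fit(X_train, y_train)
print("test R^2:", round(ann.score(X_test, y_test), 3))
```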
Otero, Carles; Aldaba, Mikel; López, Silvia; Díaz-Doutón, Fernando; Vera-Díaz, Fuensanta A; Pujol, Jaume
2018-06-01
To study the accommodative dynamics for predictable and unpredictable stimuli using manual and automated accommodative facility tests. Seventeen young healthy subjects were tested monocularly in two consecutive sessions, using five different conditions. Two conditions replicated the conventional monocular accommodative facility tests for far and near distances, performed with manually held flippers. The other three conditions were automated and conducted using an electro-optical system and an open-field autorefractor. Two of the three automated conditions replicated the predictable manual accommodative facility tests. The last automated condition was a hybrid approach using a novel method whereby far- and near-accommodative-facility tests were randomly integrated into a single test of four unpredictable accommodative demands. The within-subject standard deviations for far- and near-distance accommodative reversals were (±1, ±1) cycles per minute (cpm) for the manual flipper accommodative facility conditions and (±3, ±4) cpm for the automated conditions. The 95% limits of agreement between the manual and the automated conditions for far and near distances were poor: (-18, 12) and (-15, 3) cpm, respectively. During the hybrid unpredictable condition, the response time and accommodative response parameters were significantly (p < 0.05) larger for accommodation than for disaccommodation responses at high accommodative demands only. The response times during the transitions 0.17/2.17 D and 0.50/4.50 D appeared to be indistinguishable between the hybrid unpredictable and the conventional predictable automated tests. The automated accommodative facility test does not agree with the manual flipper test results. Operator delays in flipping the lens may account for these differences. This novel test, using unpredictable stimuli, provides a more comprehensive examination of accommodative dynamics than conventional manual accommodative facility tests. Unexpectedly, the unpredictability of the stimulus did not affect accommodation dynamics. Further studies are needed to evaluate the sensitivity of this novel hybrid technique on individuals with accommodative anomalies.
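The 95% limits of agreement quoted above are standard Bland-Altman statistics; a small sketch of how such limits are computed from paired manual and automated facility measurements is shown below, with illustrative numbers rather than the study's data.

```python
import numpy as np

def limits_of_agreement(manual, automated):
    """Bland-Altman 95% limits of agreement between two measurement methods (sketch)."""
    d = np.asarray(automated, dtype=float) - np.asarray(manual, dtype=float)
    bias = d.mean()                 # mean difference between methods
    sd = d.std(ddof=1)              # sample standard deviation of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

manual_cpm = [12, 10, 14, 9, 11, 13, 8]   # flipper accommodative facility (cpm), illustrative
auto_cpm   = [9, 12, 10, 6, 14, 9, 11]    # automated condition (cpm), illustrative
print(limits_of_agreement(manual_cpm, auto_cpm))
```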
The cobas® 6800/8800 System: a new era of automation in molecular diagnostics.
Cobb, Bryan; Simon, Christian O; Stramer, Susan L; Body, Barbara; Mitchell, P Shawn; Reisch, Natasa; Stevens, Wendy; Carmona, Sergio; Katz, Louis; Will, Stephen; Liesenfeld, Oliver
2017-02-01
Molecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.
Scholz, Stefan; Ngoli, Baltazar; Flessa, Steffen
2015-05-01
Health care infrastructure constitutes a major component of the structural quality of a health system. Infrastructural deficiencies of health services are reported in literature and research. A number of instruments exist for the assessment of infrastructure. However, no easy-to-use instruments to assess health facility infrastructure in developing countries are available. Present tools are not applicable for a rapid assessment by health facility staff. Therefore, health information systems lack data on facility infrastructure. A rapid assessment tool for the infrastructure of primary health care facilities was developed by the authors and pilot-tested in Tanzania. The tool measures the quality of all infrastructural components comprehensively and with high standardization. Ratings use a 2-1-0 scheme which is frequently used in Tanzanian health care services. Infrastructural indicators and indices are obtained from the assessment and serve for reporting and tracing of interventions. The tool was pilot-tested in Tanga Region (Tanzania). The pilot test covered seven primary care facilities in the range between dispensary and district hospital. The assessment encompassed the facilities as entities as well as 42 facility buildings and 80 pieces of technical medical equipment. A full assessment of facility infrastructure was undertaken by health care professionals while the rapid assessment was performed by facility staff. Serious infrastructural deficiencies were revealed. The rapid assessment tool proved a reliable instrument of routine data collection by health facility staff. The authors recommend integrating the rapid assessment tool in the health information systems of developing countries. Health authorities in a decentralized health system are thus enabled to detect infrastructural deficiencies and trace the effects of interventions. The tool can lay the data foundation for district facility infrastructure management.
Point-of-Care Test Equipment for Flexible Laboratory Automation.
You, Won Suk; Park, Jae Jun; Jin, Sung Moon; Ryew, Sung Moo; Choi, Hyouk Ryeol
2014-08-01
Blood tests are some of the core clinical laboratory tests for diagnosing patients. In hospitals, an automated process called total laboratory automation, which relies on a set of sophisticated equipment, is normally adopted for blood tests. Noting that the total laboratory automation system typically requires a large footprint and significant amount of power, slim and easy-to-move blood test equipment is necessary for specific demands such as emergency departments or small-size local clinics. In this article, we present a point-of-care test system that can provide flexibility and portability with low cost. First, the system components, including a reagent tray, dispensing module, microfluidic disk rotor, and photometry scanner, and their functions are explained. Then, a scheduler algorithm to provide a point-of-care test platform with an efficient test schedule to reduce test time is introduced. Finally, the results of diagnostic tests are presented to evaluate the system. © 2014 Society for Laboratory Automation and Screening.
Automated audiometry using apple iOS-based application technology.
Foulad, Allen; Bui, Peggy; Djalilian, Hamid
2013-11-01
The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry. Prospective diagnostic study conducted at an academic medical center. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess for device variations and compatibility, preliminary work was performed to compare the standardized sound output (dB) of various Apple device and headset combinations. Forty-two subjects underwent automated iOS-based hearing testing in a sound booth, automated iOS-based hearing testing in a quiet room, and conventional manual audiometry. The maximum difference in sound intensity between various Apple device and headset combinations was 4 dB. On average, 96% (95% confidence interval [CI], 91%-100%) of the threshold values obtained using the automated test in a sound booth were within 10 dB of the corresponding threshold values obtained using conventional audiometry. When the automated test was performed in a quiet room, 94% (95% CI, 87%-100%) of the threshold values were within 10 dB of the threshold values obtained using conventional audiometry. Under standardized testing conditions, 90% of the subjects preferred iOS-based audiometry as opposed to conventional audiometry. Apple iOS-based devices provide a platform for automated air conduction audiometry without requiring extra equipment and yield hearing test results that approach those of conventional audiometry.
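The agreement figure reported above (percentage of automated thresholds within 10 dB of conventional audiometry, with a 95% confidence interval) can be computed as in the following sketch; the data and the normal-approximation interval are illustrative assumptions, since the study's exact CI method is not stated here.

```python
import numpy as np

def within_10dB_agreement(conventional, automated):
    """Share of thresholds within 10 dB of the reference, with a normal-approximation 95% CI."""
    diffs = np.abs(np.asarray(automated, dtype=float) - np.asarray(conventional, dtype=float))
    p = float(np.mean(diffs <= 10))
    se = np.sqrt(p * (1 - p) / diffs.size)
    return p, max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

conventional_dB = [20, 25, 15, 30, 10, 25, 40, 35]   # dB HL, illustrative
ios_app_dB      = [25, 25, 20, 25, 15, 35, 45, 30]
print(within_10dB_agreement(conventional_dB, ios_app_dB))
```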
Satellite battery testing status
NASA Astrophysics Data System (ADS)
Haag, R.; Hall, S.
1986-09-01
Because of the large numbers of satellite cells currently being tested and anticipated at the Naval Weapons Support Center (NAVWPNSUPPCEN) Crane, Indiana, satellite cell testing is being integrated into the Battery Test Automation Project (BTAP). The BTAP, designed to meet the growing needs for battery testing at the NAVWPNSUPPCEN Crane, will consist of several Automated Test Stations (ATSs) which monitor batteries under test. Each ATS will interface with an Automation Network Controller (ANC) which will collect test data for reduction.
Satellite battery testing status
NASA Technical Reports Server (NTRS)
Haag, R.; Hall, S.
1986-01-01
Because of the large numbers of satellite cells currently being tested and anticipated at the Naval Weapons Support Center (NAVWPNSUPPCEN) Crane, Indiana, satellite cell testing is being integrated into the Battery Test Automation Project (BTAP). The BTAP, designed to meet the growing needs for battery testing at the NAVWPNSUPPCEN Crane, will consist of several Automated Test Stations (ATSs) which monitor batteries under test. Each ATS will interface with an Automation Network Controller (ANC) which will collect test data for reduction.
Automated Test Case Generation for an Autopilot Requirement Prototype
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael
2011-01-01
Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
Intelligent Control in Automation Based on Wireless Traffic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
2007-09-01
Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both to maintaining the integrity of computer systems and to increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.
Automatic publishing ISO 19115 metadata with PanMetaDocs using SensorML information
NASA Astrophysics Data System (ADS)
Stender, Vivien; Ulbricht, Damian; Schroeder, Matthias; Klump, Jens
2014-05-01
Terrestrial Environmental Observatories (TERENO) is an interdisciplinary and long-term research project spanning an Earth observation network across Germany. It includes four test sites within Germany from the North German lowlands to the Bavarian Alps and is operated by six research centers of the Helmholtz Association. The contribution by the participating research centers is organized as regional observatories. A challenge for TERENO and its observatories is to integrate all aspects of data management, data workflows, data modeling and visualizations into the design of a monitoring infrastructure. TERENO Northeast is one of the sub-observatories of TERENO and is operated by the German Research Centre for Geosciences (GFZ) in Potsdam. This observatory investigates geoecological processes in the northeastern lowland of Germany by collecting large amounts of environmentally relevant data. The success of long-term projects like TERENO depends on well-organized data management, data exchange between the partners involved and on the availability of the captured data. Data discovery and dissemination are facilitated not only through data portals of the regional TERENO observatories but also through a common spatial data infrastructure TEODOOR (TEreno Online Data repOsitORry). TEODOOR bundles the data, provided by the different web services of the single observatories, and provides tools for data discovery, visualization and data access. The TERENO Northeast data infrastructure integrates data from more than 200 instruments and makes data available through standard web services. Geographic sensor information and services are described using the ISO 19115 metadata schema. TEODOOR accesses the OGC Sensor Web Enablement (SWE) interfaces offered by the regional observatories. In addition to the SWE interface, TERENO Northeast also published data through DataCite. The necessary metadata are created in an automated process by extracting information from the SWE SensorML to create ISO 19115 compliant metadata. The resulting metadata file is stored in the GFZ Potsdam data infrastructure. The publishing workflow for file based research datasets at GFZ Potsdam is based on the eSciDoc infrastructure, using PanMetaDocs (PMD) as the graphical user interface. PMD is a collaborative, metadata based data and information exchange platform [1]. Besides SWE, metadata are also syndicated by PMD through an OAI-PMH interface. In addition, metadata from other observatories, projects or sensors in TERENO can be accessed through the TERENO Northeast data portal. [1] http://meetingorganizer.copernicus.org/EGU2012/EGU2012-7058-2.pdf
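The core of the publishing workflow described above is mapping sensor descriptions onto ISO 19115 fields. The sketch below illustrates the idea on a deliberately simplified, SensorML-like document; real SensorML uses namespaced, much richer structures, so all element names and target keys here are assumptions rather than the GFZ implementation.

```python
import xml.etree.ElementTree as ET

# Simplified, SensorML-like snippet used only for illustration.
sensorml = """
<System>
  <identifier>station-42</identifier>
  <description>Soil moisture probe, TERENO Northeast</description>
  <position><lat>53.3</lat><lon>13.2</lon></position>
  <contact>GFZ Potsdam</contact>
</System>
"""

def sensorml_to_iso19115(xml_text):
    """Map a few sensor description fields onto ISO 19115-style metadata keys (sketch)."""
    root = ET.fromstring(xml_text)
    return {
        "fileIdentifier": root.findtext("identifier"),
        "abstract": root.findtext("description"),
        "pointOfContact": root.findtext("contact"),
        "geographicElement": {
            "lat": float(root.findtext("position/lat")),
            "lon": float(root.findtext("position/lon")),
        },
    }

print(sensorml_to_iso19115(sensorml))
```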
The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog).
MacArthur, Jacqueline; Bowler, Emily; Cerezo, Maria; Gil, Laurent; Hall, Peggy; Hastings, Emma; Junkins, Heather; McMahon, Aoife; Milano, Annalisa; Morales, Joannella; Pendlington, Zoe May; Welter, Danielle; Burdett, Tony; Hindorff, Lucia; Flicek, Paul; Cunningham, Fiona; Parkinson, Helen
2017-01-04
The NHGRI-EBI GWAS Catalog has provided data from published genome-wide association studies since 2008. In 2015, the database was redesigned and relocated to EMBL-EBI. The new infrastructure includes a new graphical user interface (www.ebi.ac.uk/gwas/), ontology supported search functionality and an improved curation interface. These developments have improved the data release frequency by increasing automation of curation and providing scaling improvements. The range of available Catalog data has also been extended with structured ancestry and recruitment information added for all studies. The infrastructure improvements also support scaling for larger arrays, exome and sequencing studies, allowing the Catalog to adapt to the needs of evolving study design, genotyping technologies and user needs in the future. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Connectivity, interoperability and manageability challenges in internet of things
NASA Astrophysics Data System (ADS)
Haseeb, Shariq; Hashim, Aisha Hassan A.; Khalifa, Othman O.; Ismail, Ahmad Faris
2017-09-01
The vision of Internet of Things (IoT) is about interconnectivity between sensors, actuators, people and processes. IoT exploits connectivity between physical objects like fridges, cars, utilities, buildings and cities for enhancing the lives of people through automation and data analytics. However, this sudden increase in connected heterogeneous IoT devices takes a huge toll on the existing Internet infrastructure and introduces new challenges for researchers to embark upon. This paper highlights the effects of heterogeneity challenges on connectivity, interoperability, and manageability in greater detail. It also surveys some of the existing solutions adopted in the core network to solve the challenges of massive IoT deployment. The paper finally concludes that IoT architecture and network infrastructure need to be re-engineered from the ground up, so that IoT solutions can be safely and efficiently deployed.
Whole genome sequencing in clinical and public health microbiology
Kwong, J. C.; McCallum, N.; Sintchenko, V.; Howden, B. P.
2015-01-01
Summary: Genomics and whole genome sequencing (WGS) have the capacity to greatly enhance knowledge and understanding of infectious diseases and clinical microbiology. The growth and availability of bench-top WGS analysers has facilitated the feasibility of genomics in clinical and public health microbiology. Given current resource and infrastructure limitations, WGS is most applicable to use in public health laboratories, reference laboratories, and hospital infection control-affiliated laboratories. As WGS represents the pinnacle for strain characterisation and epidemiological analyses, it is likely to replace traditional typing methods, resistance gene detection and other sequence-based investigations (e.g., 16S rDNA PCR) in the near future. Although genomic technologies are rapidly evolving, widespread implementation in clinical and public health microbiology laboratories is limited by the need for effective semi-automated pipelines, standardised quality control and data interpretation, bioinformatics expertise, and infrastructure. PMID:25730631
Whole genome sequencing in clinical and public health microbiology.
Kwong, J C; McCallum, N; Sintchenko, V; Howden, B P
2015-04-01
Genomics and whole genome sequencing (WGS) have the capacity to greatly enhance knowledge and understanding of infectious diseases and clinical microbiology. The growth and availability of bench-top WGS analysers has facilitated the feasibility of genomics in clinical and public health microbiology. Given current resource and infrastructure limitations, WGS is most applicable to use in public health laboratories, reference laboratories, and hospital infection control-affiliated laboratories. As WGS represents the pinnacle for strain characterisation and epidemiological analyses, it is likely to replace traditional typing methods, resistance gene detection and other sequence-based investigations (e.g., 16S rDNA PCR) in the near future. Although genomic technologies are rapidly evolving, widespread implementation in clinical and public health microbiology laboratories is limited by the need for effective semi-automated pipelines, standardised quality control and data interpretation, bioinformatics expertise, and infrastructure.
Cybersecurity Intrusion Detection and Monitoring for Field Area Network: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pietrowicz, Stanley
This report summarizes the key technical accomplishments, industry impact and performance of the I2-CEDS grant entitled “Cybersecurity Intrusion Detection and Monitoring for Field Area Network”. Led by Applied Communication Sciences (ACS/Vencore Labs) in conjunction with its utility partner Sacramento Municipal Utility District (SMUD), the project accelerated research on a first-of-its-kind cybersecurity monitoring solution for Advanced Meter Infrastructure and Distribution Automation field networks. It advanced the technology to a validated, full-scale solution that detects anomalies, intrusion events and improves utility situational awareness and visibility. The solution was successfully transitioned and commercialized for production use as SecureSmart™ Continuous Monitoring. Discoveries made with SecureSmart™ Continuous Monitoring led to tangible and demonstrable improvements in the security posture of the US national electric infrastructure.
Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-04-13
Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. ©Mujeeb A Basit, Krystal L Baldwin, Vaishnavi Kannan, Emily L Flahaven, Cassandra J Parks, Jason M Ott, Duwayne L Willett. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.
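A flavour of the acceptance-test tables described above can be given with a small sketch: each row states inputs and the expected advisory behaviour, and the suite compares expected against actual rule output. The field names and the rule logic below are simplified assumptions, not the production CDS configuration or the FitNesse fixture code.

```python
# Toy stand-in for the CDS rule under test (simplified assumption).
def stroke_swallow_alert(department, suspected_stroke, swallow_screen_done, order_route):
    return (department == "ED" and suspected_stroke
            and not swallow_screen_done and order_route == "oral")

# Each row plays the role of one decision-table case: inputs plus expected alert behaviour.
acceptance_cases = [
    # department, stroke?, screen done?, route,  expected alert
    ("ED",  True,  False, "oral", True),
    ("ED",  True,  True,  "oral", False),   # screening already documented
    ("ED",  False, False, "oral", False),   # no stroke suspicion
    ("ICU", True,  False, "oral", False),   # outside the applicable setting
    ("ED",  True,  False, "IV",   False),   # non-oral route
]

for dept, stroke, screened, route, expected in acceptance_cases:
    actual = stroke_swallow_alert(dept, stroke, screened, route)
    status = "PASS" if actual == expected else "FAIL"
    print(status, dept, stroke, screened, route)
```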
2017-04-13
Applications were ported to the OmpSs programming model, including a basic image processing algorithm, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further improvements to the OmpSs model were made, along with a port of the dynamic load balancing library to OmpSs and several updates to the tools infrastructure.
Comparing New Zealand's 'Middle Out' health information technology strategy with other OECD nations.
Bowden, Tom; Coiera, Enrico
2013-05-01
Implementation of efficient, universally applied, computer-to-computer communications is a high priority for many national health systems. As a consequence, much effort has been channelled into finding ways in which a patient's previous medical history can be made accessible when needed. A number of countries have attempted to share patients' records, with varying degrees of success. While most efforts to create record-sharing architectures have relied upon government-provided strategy and funding, New Zealand has taken a different approach. Like most British Commonwealth nations, New Zealand has a 'hybrid' publicly/privately funded health system. However, its information technology infrastructure and automation have largely been developed by the private sector, working closely with regional and central government agencies. Currently the sector is focused on finding ways in which patient records can be shared amongst providers across three different regions. New Zealand's healthcare IT model combines government-contributed funding, core infrastructure, facilitation and leadership with private sector investment and skills, and is being delivered via a set of controlled experiments. The net result is a 'Middle Out' approach to healthcare automation. 'Middle Out' relies upon having a clear, well-articulated health-reform strategy and a determination by both public and private sector organisations to implement useful healthcare IT solutions by working closely together. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The BACnet Campus Challenge - Part 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masica, Ken; Tom, Steve
Here, the BACnet protocol was designed to achieve interoperability among building automation vendors and evolve over time to include new functionality as well as support new communication technologies such as the Ethernet and IP protocols as they became prevalent and economical in the market place. For large multi-building, multi-vendor campus environments, standardizing on the BACnet protocol as an implementation strategy can be a key component in meeting the challenge of an interoperable, flexible, and scalable building automation system. The interoperability of BACnet is especially important when large campuses with legacy equipment have DDC upgrades to facilities performed over different time frames and use different contractors that install equipment from different vendors under the guidance of different campus HVAC project managers. In these circumstances, BACnet can serve as a common foundation for interoperability when potential variability exists in approaches to the design-build process by numerous parties over time. Likewise, BACnet support for a range of networking protocols and technologies can be a key strategy for achieving flexible and scalable automation systems as campuses and enterprises expand networking infrastructures using standard interoperable protocols like IP and Ethernet.
The BACnet Campus Challenge - Part 1
Masica, Ken; Tom, Steve
2015-12-01
Here, the BACnet protocol was designed to achieve interoperability among building automation vendors and evolve over time to include new functionality as well as support new communication technologies such as the Ethernet and IP protocols as they became prevalent and economical in the market place. For large multi-building, multi-vendor campus environments, standardizing on the BACnet protocol as an implementation strategy can be a key component in meeting the challenge of an interoperable, flexible, and scalable building automation system. The interoperability of BACnet is especially important when large campuses with legacy equipment have DDC upgrades to facilities performed over different time frames and use different contractors that install equipment from different vendors under the guidance of different campus HVAC project managers. In these circumstances, BACnet can serve as a common foundation for interoperability when potential variability exists in approaches to the design-build process by numerous parties over time. Likewise, BACnet support for a range of networking protocols and technologies can be a key strategy for achieving flexible and scalable automation systems as campuses and enterprises expand networking infrastructures using standard interoperable protocols like IP and Ethernet.
NASA Astrophysics Data System (ADS)
Vitásek, Stanislav; Matějka, Petr
2017-09-01
The article deals with problematic parts of automated processing of quantity takeoff (QTO) from data generated in BIM model. It focuses on models of road constructions, and uses volumes and dimensions of excavation work to create an estimate of construction costs. The article uses a case study and explorative methods to discuss possibilities and problems of data transfer from a model to a price system of construction production when such transfer is used for price estimates of construction works. Current QTOs and price tenders are made with 2D documents. This process is becoming obsolete because more modern tools can be used. The BIM phenomenon enables partial automation in processing volumes and dimensions of construction units and matching the data to units in a given price scheme. Therefore price of construction can be estimated and structured without lengthy and often imprecise manual calculations. The use of BIM for QTO is highly dependent on local market budgeting systems, therefore proper push/pull strategy is required. It also requires proper requirements specification, compatible pricing database and software.
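The automation step described (matching model quantities to items of a price system and summing costs) reduces to a lookup-and-multiply pass over the takeoff, as in the sketch below; the item codes, unit prices, and quantities are invented for illustration and do not come from any particular pricing database.

```python
# Minimal sketch: each model element carries an item code and a quantity; unit prices
# come from a local price system. All values are illustrative placeholders.
price_db = {
    "121-10-1": ("excavation, class 3 soil", "m3", 185.0),
    "121-20-4": ("embankment fill",          "m3", 240.0),
}

takeoff = [            # (item code, quantity) exported from the BIM model
    ("121-10-1", 1520.0),
    ("121-20-4", 830.0),
]

total = 0.0
for code, qty in takeoff:
    name, unit, unit_price = price_db[code]
    cost = qty * unit_price
    total += cost
    print(f"{code} {name}: {qty:.0f} {unit} x {unit_price:.2f} = {cost:.2f}")
print(f"estimated cost: {total:.2f}")
```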
76 FR 19468 - Amended Certification Regarding Eligibility To Apply for Worker Adjustment Assistance
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-07
... Known As ATW Automation, Inc., Livonia Michigan TA-W-72,075A Assembly & Test Worldwide, Inc., Currently... Saginaw, Michigan locations of Assembly & Test Worldwide, Inc., are currently known as ATW Automation, Inc... Automation, Inc., Livonia, Michigan (TA-W-72,075); Assembly & Test Worldwide, Inc., currently known as ATW...
NASA Astrophysics Data System (ADS)
Lato, M. J.; Frauenfelder, R.; Bühler, Y.
2012-09-01
Snow avalanches in mountainous areas pose a significant threat to infrastructure (roads, railways, energy transmission corridors), personal property (homes) and recreational areas, as well as to the lives of people living and moving in alpine terrain. The impacts of snow avalanches range from delays and financial loss through road and railway closures, destruction of property and infrastructure, to loss of life. Avalanche warnings today are mainly based on meteorological information, snow pack information, field observations, historically recorded avalanche events as well as experience and expert knowledge. The ability to automatically identify snow avalanches using Very High Resolution (VHR) optical remote sensing imagery has the potential to assist in the development of accurate, spatially widespread, detailed maps of zones prone to avalanches as well as to build up databases of past avalanche events in poorly accessible regions. This would provide decision makers with improved knowledge of the frequency and size distributions of avalanches in such areas. We used an object-oriented image interpretation approach, which employs segmentation and classification methodologies, to detect recent snow avalanche deposits within VHR panchromatic optical remote sensing imagery. This produces avalanche deposit maps, which can be integrated with other spatial mapping and terrain data. The object-oriented approach has been tested and validated against manually generated maps in which avalanches are visually recognized and digitized. The accuracies (both user's and producer's) are over 0.9, with errors of commission less than 0.05. Future research is directed to widespread testing of the algorithm on data generated by various sensors and improvement of the algorithm in high noise regions as well as the mapping of avalanche paths alongside their deposits.
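The accuracy figures quoted above follow the usual remote-sensing convention of user's and producer's accuracy for the mapped class. A small sketch of how they derive from comparing automatically detected deposits with manually digitized ones is shown below, with illustrative counts rather than the study's validation data.

```python
def map_accuracy(tp, fp, fn):
    """User's and producer's accuracy plus commission error for the 'avalanche deposit' class."""
    users = tp / (tp + fp)        # of mapped deposits, how many are real
    producers = tp / (tp + fn)    # of real deposits, how many were mapped
    commission = fp / (tp + fp)   # share of mapped deposits that are false alarms
    return users, producers, commission

# Illustrative counts of deposit objects: true positives, false positives, false negatives.
print(map_accuracy(tp=184, fp=8, fn=15))
```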
An Introduction to Flight Software Development: FSW Today, FSW 2010
NASA Technical Reports Server (NTRS)
Gouvela, John
2004-01-01
Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and new development projects including Cockpit Avionics Upgrade are applied to projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high-quality software. It will propose what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity needed to support future space exploration. The technologies in use today within FSW development include tools that provide requirements tracking, integrated change management, modeling and simulation software. Specific challenges that have been met include the introduction and integration of a Commercial Off-the-Shelf (COTS) Real-Time Operating System for critical functions. Though technology prediction has proved to be imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self-healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object-oriented UML design, iterative development using independent components, as well as rapid prototyping. In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of the workforce processes. Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture with all processes being guided by automated office assistants. The infrastructure in use today includes strict software development and configuration management procedures, including strong control of resource management and critical skills coverage. This will evolve to a fully integrated staff organization with efficient and effective communication throughout all levels guided by a Mission-Systems Architecture framework with focus on risk management and attention toward inevitable product obsolescence. This infrastructure of computing equipment, software and processes will itself be subject to technological change and the need for management of change and improvement.
Automatic Generation of Test Oracles - From Pilot Studies to Application
NASA Technical Reports Server (NTRS)
Feather, Martin S.; Smith, Ben
1998-01-01
There is a trend towards the increased use of automation in V&V. Automation can yield savings in time and effort. For critical systems, where thorough V&V is required, these savings can be substantial. We describe a progression from pilot studies to development and use of V&V automation. We used pilot studies to ascertain opportunities for, and suitability of, automating various analyses whose results would contribute to V&V. These studies culminated in the development of an automatic generator of automated test oracles. This was then applied and extended in the course of testing an AI planning system that is a key component of an autonomous spacecraft.
Spaceport Command and Control System Automated Testing
NASA Technical Reports Server (NTRS)
Stein, Meriel
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires high quality testing that will properly measure the capabilities of the system. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.
Spaceport Command and Control System Automation Testing
NASA Technical Reports Server (NTRS)
Hwang, Andrew
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires high quality testing that will properly measure the capabilities of the system. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.
Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation
NASA Technical Reports Server (NTRS)
Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat
2011-01-01
The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) Web Services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.
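As a hedged illustration of the RESTful side of such an interface, the sketch below starts a test and polls for its result over HTTP. The base URL, resource paths, and JSON fields are assumptions, and the mDNS/DNS-SD discovery step is replaced by a hard-coded host name; it is not the interface definition from the trade study.

```python
import time
import requests

# Hypothetical station address; in the architecture described above it would be
# located via mDNS/DNS-SD service discovery rather than hard-coded.
BASE = "http://test-station.local:8080"

def run_test(test_name, parameters):
    """Start a test over a RESTful interface and poll until it completes (sketch)."""
    resp = requests.post(f"{BASE}/tests",
                         json={"name": test_name, "parameters": parameters}, timeout=10)
    resp.raise_for_status()
    test_id = resp.json()["id"]
    while True:
        status = requests.get(f"{BASE}/tests/{test_id}", timeout=10).json()
        if status.get("state") in ("passed", "failed"):
            return status
        time.sleep(1.0)

# Example call (requires a matching service to be running):
# result = run_test("power_on_self_test", {"bus_voltage": 28.0})
```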
Direct data access protocols benchmarking on DPM
NASA Astrophysics Data System (ADS)
Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina
2015-12-01
The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under the circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Burnishing Techniques Strengthen Hip Implants; Signal Processing Methods Monitor Cranial Pressure; Ultraviolet-Blocking Lenses Protect, Enhance Vision; Hyperspectral Systems Increase Imaging Capabilities; Programs Model the Future of Air Traffic Management; Tail Rotor Airfoils Stabilize Helicopters, Reduce Noise; Personal Aircraft Point to the Future of Transportation; Ducted Fan Designs Lead to Potential New Vehicles; Winglets Save Billions of Dollars in Fuel Costs; Sensor Systems Collect Critical Aerodynamics Data; Coatings Extend Life of Engines and Infrastructure; Radiometers Optimize Local Weather Prediction; Energy-Efficient Systems Eliminate Icing Danger for UAVs; Rocket-Powered Parachutes Rescue Entire Planes; Technologies Advance UAVs for Science, Military; Inflatable Antennas Support Emergency Communication; Smart Sensors Assess Structural Health; Hand-Held Devices Detect Explosives and Chemical Agents; Terahertz Tools Advance Imaging for Security, Industry; LED Systems Target Plant Growth; Aerogels Insulate Against Extreme Temperatures; Image Sensors Enhance Camera Technologies; Lightweight Material Patches Allow for Quick Repairs; Nanomaterials Transform Hairstyling Tools; Do-It-Yourself Additives Recharge Auto Air Conditioning; Systems Analyze Water Quality in Real Time; Compact Radiometers Expand Climate Knowledge; Energy Servers Deliver Clean, Affordable Power; Solutions Remediate Contaminated Groundwater; Bacteria Provide Cleanup of Oil Spills, Wastewater; Reflective Coatings Protect People and Animals; Innovative Techniques Simplify Vibration Analysis; Modeling Tools Predict Flow in Fluid Dynamics; Verification Tools Secure Online Shopping, Banking; Toolsets Maintain Health of Complex Systems; Framework Resources Multiply Computing Power; Tools Automate Spacecraft Testing, Operation; GPS Software Packages Deliver Positioning Solutions; Solid-State Recorders Enhance Scientific Data Collection; Computer Models Simulate Fine Particle Dispersion; Composite Sandwich Technologies Lighten Components; Cameras Reveal Elements in the Short Wave Infrared; Deformable Mirrors Correct Optical Distortions; Stitching Techniques Advance Optics Manufacturing; Compact, Robust Chips Integrate Optical Functions; Fuel Cell Stations Automate Processes, Catalyst Testing; Onboard Systems Record Unique Videos of Space Missions; Space Research Results Purify Semiconductor Materials; and Toolkits Control Motion of Complex Robotics.
NASA Technical Reports Server (NTRS)
Tartt, David M.; Hewett, Marle D.; Duke, Eugene L.; Cooper, James A.; Brumbaugh, Randal W.
1989-01-01
The Automated Flight Test Management System (ATMS) is being developed as part of the NASA Aircraft Automation Program. This program focuses on the application of interdisciplinary state-of-the-art technology in artificial intelligence, control theory, and systems methodology to problems of operating and flight testing high-performance aircraft. The development of a Flight Test Engineer's Workstation (FTEWS) is presented, with a detailed description of the system, technical details, and future planned developments. The goal of the FTEWS is to provide flight test engineers and project officers with an automated computer environment for planning, scheduling, and performing flight test programs. The FTEWS system is an outgrowth of the development of ATMS and is an implementation of a component of ATMS on SUN workstations.
Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-01-01
Background: Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a “safety net” for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and “living” design documentation. Rapid-cycle development or “agile” methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as “executable requirements.” Objective: We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Methods: Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory’s expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. Results: We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the “executable requirements” are shown prior to building the CDS alert, during build, and after successful build. Conclusions: Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922
Technological advances for studying human behavior
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
Technological advances for studying human behavior are noted in viewgraph form. It is asserted that performance-aiding systems are proliferating without a fundamental understanding of how they would interact with the humans who must control them. Two views of automation research, the hardware view and the human-centered view, are listed. Other viewgraphs give information on vital elements for human-centered research, a continuum of the research process, available technologies, new technologies for persistent problems, a sample research infrastructure, the need for metrics, and examples of data-link technology.
Increasing the security at vital infrastructures: automated detection of deviant behaviors
NASA Astrophysics Data System (ADS)
Burghouts, Gertjan J.; den Hollander, Richard; Schutte, Klamer; Marck, Jan-Willem; Landsmeer, Sander; den Breejen, Eric
2011-06-01
This paper discusses the decomposition of hostile intentions into abnormal behaviors. A list of such behaviors has been compiled for the specific case of public transport. Some of the deviant behaviors are hard to observe by people, as they are in the midst of the crowd. Examples are deviant walking patterns, prohibited actions such as taking photos and waiting without taking the train. We discuss our visual analytics algorithms and demonstrate them on CCTV footage from the Amsterdam train station.
Phenomenology tools on cloud infrastructures using OpenStack
NASA Astrophysics Data System (ADS)
Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.
2013-04-01
We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. On this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
Cost efficient command management
NASA Technical Reports Server (NTRS)
Brandt, Theresa; Murphy, C. W.; Kuntz, Jon; Barlett, Tom
1996-01-01
The design and implementation of a command management system (CMS) for a NASA control center is described. The technology innovations implemented in the CMS provide the infrastructure required for operations cost reduction and future development cost reduction through increased operational efficiency and reuse in future missions. The command management design facilitates error-free operations, which enables the automation of routine control center functions and allows for the distribution of scheduling responsibility to the instrument teams. The reusable system was developed using object-oriented methodologies.
Automated processing of endoscopic surgical instruments.
Roth, K; Sieber, J P; Schrimm, H; Heeg, P; Buess, G
1994-10-01
This paper deals with the requirements for automated processing of endoscopic surgical instruments. After a brief analysis of the current problems, solutions are discussed. Test procedures have been developed to validate the automated processing, so that the cleaning results are guaranteed and reproducible. Also, a device for testing and cleaning, called TC-MIC, was designed together with Netzsch Newamatic and PCI to automate processing and reduce manual work.
Xu, Weiyi; Wan, Feng; Lou, Yufeng; Jin, Jiali; Mao, Weilin
2014-01-01
A number of automated devices for pretransfusion testing have recently become available. This study evaluated the Immucor Galileo System, a fully automated device based on the microplate hemagglutination technique, for ABO/Rh (D) determinations. Routine ABO/Rh typing tests were performed on 13,045 samples using the Immucor automated instruments. The manual tube method was used to resolve ABO forward and reverse grouping discrepancies. D-negative test results were investigated and confirmed manually by the indirect antiglobulin test (IAT). The system rejected 70 tests for sample inadequacy. 87 samples were read as "No-type-determined" due to forward and reverse grouping discrepancies. 25 tests gave these results because of sample hemolysis. After further tests, we found 34 tests were caused by weakened RBC antibodies, 5 tests were attributable to weak A and/or B antigens, 4 tests were due to mixed-field reactions, and 8 tests had high-titer cold agglutinins which react only at temperatures below 34 degrees C. In the remaining 11 cases, irregular RBC antibodies were identified in 9 samples (seven anti-M and two anti-P) and two subgroups were identified in 2 samples (one A1 and one A2) by a reference laboratory. As for D typing, 2 weak D+ samples missed by automated systems gave negative results, but weak-positive reactions were observed in the IAT. The Immucor Galileo System is reliable and well suited for ABO and D blood grouping, although several factors can cause discrepancies in ABO/D typing with a fully automated system. It is suggested that standardization of sample collection may improve the performance of the fully automated system.
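For readers unfamiliar with forward/reverse grouping, the sketch below shows how concordant reactions resolve to an ABO group and how any mismatch is flagged for manual resolution, mirroring the "No-type-determined" outcome described above. The rule table is a textbook simplification, not the Galileo analyser's algorithm.

```python
# Forward grouping: patient red cells tested with anti-A and anti-B reagents.
# Reverse grouping: patient plasma tested with A1 and B reagent red cells.
FORWARD = {("+", "-"): "A", ("-", "+"): "B", ("+", "+"): "AB", ("-", "-"): "O"}
REVERSE = {("-", "+"): "A", ("+", "-"): "B", ("-", "-"): "AB", ("+", "+"): "O"}

def interpret_abo(anti_a, anti_b, a1_cells, b_cells):
    fwd = FORWARD.get((anti_a, anti_b))
    rev = REVERSE.get((a1_cells, b_cells))
    if fwd is None or rev is None or fwd != rev:
        return "No-type-determined"   # discrepancy -> resolve manually (e.g., tube method)
    return fwd

print(interpret_abo("+", "-", "-", "+"))   # concordant group A
print(interpret_abo("+", "-", "-", "-"))   # weak/absent reverse reaction -> discrepancy
```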
Northwest Open Automated Demand Response Technology Demonstration Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiliccote, Sila; Piette, Mary Ann; Dudley, Junqiao
The Lawrence Berkeley National Laboratory (LBNL) Demand Response Research Center (DRRC) demonstrated and evaluated open automated demand response (OpenADR) communication infrastructure to reduce winter morning and summer afternoon peak electricity demand in commercial buildings in the Seattle area. LBNL performed this demonstration for the Bonneville Power Administration (BPA) in the Seattle City Light (SCL) service territory at five sites: Seattle Municipal Tower, Seattle University, McKinstry, and two Target stores. This report describes the process and results of the demonstration. OpenADR is an information exchange model that uses a client-server architecture to automate demand-response (DR) programs. These field tests evaluated the feasibility of deploying fully automated DR during both winter and summer peak periods. DR savings were evaluated for several building systems and control strategies. This project studied DR during hot summer afternoons and cold winter mornings, both periods when electricity demand is typically high. This is the DRRC project team's first experience using automation for year-round DR resources and evaluating the flexibility of commercial building end-use loads to participate in DR in dual-peaking climates. The lessons learned contribute to understanding end-use loads that are suitable for dispatch at different times of the year. The project was funded by BPA and SCL. BPA is a U.S. Department of Energy agency headquartered in Portland, Oregon, serving the Pacific Northwest. BPA operates an electricity transmission system and markets wholesale electrical power at cost from federal dams, one non-federal nuclear plant, and other non-federal hydroelectric and wind energy generation facilities. Created by the citizens of Seattle in 1902, SCL is the second-largest municipal utility in America. SCL purchases approximately 40% of its electricity and the majority of its transmission from BPA through a preference contract. SCL also provides ancillary services within its own balancing authority. The relationship between BPA and SCL creates a unique opportunity to create DR programs that address both BPA's and SCL's markets simultaneously. Although simultaneously addressing both markets could significantly increase the value of DR programs for BPA, SCL, and the end user, establishing program parameters that maximize this value is challenging because of complex contractual arrangements and the absence of a central Independent System Operator or Regional Transmission Organization in the Northwest.
Péharpré, D; Cliquet, F; Sagné, E; Renders, C; Costy, F; Aubert, M
1999-07-01
The rapid fluorescent focus inhibition test (RFFIT) and the fluorescent antibody virus neutralization test (FAVNT) are both diagnostic tests for determining levels of rabies neutralizing antibodies. An automated method for determining fluorescence has been implemented to reduce the work time required for fluorescent visual microscopic observations. The automated method offers several advantages over conventional visual observation, such as the ability to rapidly test many samples. The antibody titers obtained with automated techniques were similar to those obtained with both the RFFIT (n = 165, r = 0.93, P < 0.001) and the FAVNT (n = 52, r = 0.99, P < 0.001).
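As a rough illustration of the agreement analysis above, the sketch below computes a Pearson correlation between hypothetical paired titer readings; the data values are invented, not the study's:

```python
# Illustrative sketch only: hypothetical paired antibody titers, not the study data.
import numpy as np
from scipy import stats

# log2 titers read visually vs. by the automated fluorescence reader (made-up values)
visual = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 5.0, 6.0])
automated = np.array([1.2, 1.8, 2.1, 3.2, 3.9, 4.8, 5.3, 5.9])

r, p = stats.pearsonr(visual, automated)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # a high r indicates the two readings agree
```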
[Automated analyzer of enzyme immunoassay].
Osawa, S
1995-09-01
Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, detection methods, the number of tests per unit time, and analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can access and measure samples randomly. I describe recent advances in automated analyzers, reviewing their labeled antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.
Automated smartphone audiometry: Validation of a word recognition test app.
Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J
2018-03-01
Develop and validate an automated smartphone word recognition test. Cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Jarvis, Susan; Moretti, David; Morrissey, Ronald; Dimarzio, Nancy
2003-10-01
The Marine Mammal Monitoring on Navy Ranges (M3R) project has developed a toolset for passive detection and localization of marine mammals using the existing infrastructure of Navy's undersea ranges. The Office of Naval Research funded the M3R project as part of the Navy's effort to determine the effects of acoustic and other emissions on marine mammals and threatened/endangered species. A necessary first step in this effort is the creation of a baseline of behavior, which requires long-term monitoring of marine mammals. Such monitoring, in turn, requires the ability to detect and localize the animals. This paper will present the passive acoustic monitoring and localization tools developed under M3R. It will also present results of the deployment of the M3R tools at the Atlantic Undersea Test and Evaluation Center (AUTEC), Andros Island, Bahamas from June through November 2003. Finally, it will discuss current work to improve automated species classification.
NASA Astrophysics Data System (ADS)
Alessio, F.; Barandela, M. C.; Callot, O.; Duval, P.-Y.; Franek, B.; Frank, M.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Neufeld, N.; Sambade, A.; Schwemmer, R.; Somogyi, P.
2010-04-01
LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented.
Devine, Emily Beth; Capurro, Daniel; van Eaton, Erik; Alfonso-Cristancho, Rafael; Devlin, Allison; Yanez, N. David; Yetisgen-Yildiz, Meliha; Flum, David R.; Tarczy-Hornoch, Peter
2013-01-01
Background: The field of clinical research informatics includes creation of clinical data repositories (CDRs) used to conduct quality improvement (QI) activities and comparative effectiveness research (CER). Ideally, CDR data are accurately and directly abstracted from disparate electronic health records (EHRs), across diverse health-systems. Objective: Investigators from Washington State's Surgical Care Outcomes and Assessment Program (SCOAP) Comparative Effectiveness Research Translation Network (CERTAIN) are creating such a CDR. This manuscript describes the automation and validation methods used to create this digital infrastructure. Methods: SCOAP is a QI benchmarking initiative. Data are manually abstracted from EHRs and entered into a data management system. CERTAIN investigators are now deploying Caradigm's Amalga™ tool to facilitate automated abstraction of data from multiple, disparate EHRs. Concordance is calculated to compare automatically abstracted data to manually abstracted data. Performance measures are calculated between Amalga and each parent EHR. Validation takes place in repeated loops, with improvements made over time. When automated abstraction reaches the current benchmark for abstraction accuracy (95%), it will 'go live' at each site. Progress to Date: A technical analysis was completed at 14 sites. Five sites are contributing; the remaining sites prioritized meeting Meaningful Use criteria. Participating sites are contributing 15-18 unique data feeds, totaling 13 surgical registry use cases. Common feeds are registration, laboratory, transcription/dictation, radiology, and medications. Approximately 50% of 1,320 designated data elements are being automatically abstracted: 25% from structured data and 25% from text mining. Conclusion: By semi-automating data abstraction and conducting a rigorous validation, CERTAIN investigators will semi-automate data collection to conduct QI and CER, while advancing the Learning Healthcare System. PMID:25848565
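A minimal sketch of the concordance calculation described above, using hypothetical case records and field values; none of these names come from the study:

```python
# Illustrative sketch: percent agreement (concordance) between automated and manual
# abstraction for one data element, using hypothetical records keyed by case ID.
manual = {"case01": "aspirin", "case02": "none", "case03": "warfarin", "case04": "aspirin"}
automated = {"case01": "aspirin", "case02": "none", "case03": "heparin", "case04": "aspirin"}

shared = manual.keys() & automated.keys()
matches = sum(manual[c] == automated[c] for c in shared)
concordance = matches / len(shared)
print(f"Concordance: {concordance:.0%} over {len(shared)} cases")  # the cited go-live benchmark is 95%
```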
Constructing Aligned Assessments Using Automated Test Construction
ERIC Educational Resources Information Center
Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui
2013-01-01
We describe an innovative automated test construction algorithm for building aligned achievement tests. By incorporating the algorithm into the test construction process, along with other test construction procedures for building reliable and unbiased assessments, the result is much more valid tests than result from current test construction…
An extensible infrastructure for fully automated spike sorting during online experiments.
Santhanam, Gopal; Sahani, Maneesh; Ryu, Stephen; Shenoy, Krishna
2004-01-01
When recording extracellular neural activity, it is often necessary to distinguish action potentials arising from distinct cells near the electrode tip, a process commonly referred to as "spike sorting." In a number of experiments, notably those that involve direct neuroprosthetic control of an effector, this cell-by-cell classification of the incoming signal must be achieved in real time. Several commercial offerings are available for this task, but all of these require some manual supervision per electrode, making each scheme cumbersome with large electrode counts. We present a new infrastructure that leverages existing unsupervised algorithms to sort and subsequently implement the resulting signal classification rules for each electrode using a commercially available Cerebus neural signal processor. We demonstrate an implementation of this infrastructure to classify signals from a cortical electrode array, using a probabilistic clustering algorithm (described elsewhere). The data were collected from a rhesus monkey performing a delayed center-out reach task. We used both sorted and unsorted (thresholded) action potentials from an array implanted in pre-motor cortex to "predict" the reach target, a common decoding operation in neuroprosthetic research. The use of sorted spikes led to an improvement in decoding accuracy of between 3.6 and 6.4%.
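The clustering algorithm itself is described elsewhere; as an illustration only, the sketch below applies a generic Gaussian mixture model to synthetic spike-waveform features to show what unsupervised per-electrode spike classification can look like. This is not the authors' algorithm or the Cerebus interface:

```python
# Illustrative sketch: unsupervised sorting of spike waveforms on one electrode.
# Synthetic data and a generic Gaussian mixture stand in for the authors' algorithm.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical snippets: 500 spikes x 32 samples, drawn from two synthetic waveform shapes
templates = np.vstack([np.sin(np.linspace(0, np.pi, 32)),
                       -np.hanning(32)])
labels_true = rng.integers(0, 2, 500)
waveforms = templates[labels_true] + 0.2 * rng.standard_normal((500, 32))

features = PCA(n_components=3).fit_transform(waveforms)   # reduce each spike to a few features
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
units = gmm.predict(features)                              # per-spike unit assignment
print(np.bincount(units))                                  # spike counts per sorted unit
```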
Risk assessment for physical and cyber attacks on critical infrastructures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Bryan J.; Sholander, Peter E.; Phelan, James M.
2005-08-01
Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies. Existing risk assessment methodologies consider physical security and cyber security separately. As such, they do not accurately model attacks that involve defeating both physical protection and cyber protection elements (e.g., hackers turning off alarm systems prior to forced entry). This paper presents a risk assessment methodology that accounts for both physical and cyber security. It also preserves the traditional security paradigm of detect, delay, and respond, while accounting for the possibility that a facility may be able to recover from or mitigate the results of a successful attack before serious consequences occur. The methodology provides a means for ranking those assets most at risk from malevolent attacks. Because the methodology is automated, the analyst can also play 'what if' with mitigation measures to gain a better understanding of how best to expend resources towards securing the facilities. It is simple enough to be applied to large infrastructure facilities without developing highly complicated models. Finally, it is applicable to facilities with extensive security as well as those that are less well protected.
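The paper's methodology is not reproduced here; the sketch below is only a generic illustration of ranking assets when protection can be defeated through either the physical or the cyber layer, with invented scores and field names:

```python
# Illustrative sketch only: a generic risk ranking that treats protection as defeated
# if EITHER its physical or its cyber element is defeated (e.g., alarms disabled by a
# hacker before forced entry). Scores and fields are hypothetical, not the paper's model.
assets = [
    {"name": "substation A", "consequence": 9, "p_defeat_physical": 0.2, "p_defeat_cyber": 0.4},
    {"name": "control room", "consequence": 7, "p_defeat_physical": 0.1, "p_defeat_cyber": 0.6},
]

def risk(a):
    # protection fails if the physical OR the cyber layer is defeated
    p_defeat = 1 - (1 - a["p_defeat_physical"]) * (1 - a["p_defeat_cyber"])
    return a["consequence"] * p_defeat

for a in sorted(assets, key=risk, reverse=True):
    print(f'{a["name"]}: risk score {risk(a):.2f}')
```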
Architecture for Cognitive Networking within NASA's Future Space Communications Infrastructure
NASA Technical Reports Server (NTRS)
Clark, Gilbert; Eddy, Wesley M.; Johnson, Sandra K.; Barnes, James; Brooks, David
2016-01-01
Future space mission concepts and designs pose many networking challenges for command, telemetry, and science data applications with diverse end-to-end data delivery needs. For future end-to-end architecture designs, a key challenge is meeting expected application quality of service requirements for multiple simultaneous mission data flows with options to use diverse onboard local data buses, commercial ground networks, and multiple satellite relay constellations in LEO, GEO, MEO, or even deep space relay links. Effectively utilizing a complex network topology requires orchestration and direction that spans the many discrete, individually addressable computer systems, which cause them to act in concert to achieve the overall network goals. The system must be intelligent enough to not only function under nominal conditions, but also adapt to unexpected situations, and reorganize or adapt to perform roles not originally intended for the system or explicitly programmed. This paper describes an architecture enabling the development and deployment of cognitive networking capabilities into the envisioned future NASA space communications infrastructure. We begin by discussing the need for increased automation, including inter-system discovery and collaboration. This discussion frames the requirements for an architecture supporting cognitive networking for future missions and relays, including both existing endpoint-based networking models and emerging information-centric models. From this basis, we discuss progress on a proof-of-concept implementation of this architecture, and results of implementation and initial testing of a cognitive networking on-orbit application on the SCaN Testbed attached to the International Space Station.
NASA Technical Reports Server (NTRS)
Murphy, James R.; Otto, Neil M.
2017-01-01
NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human-in-the-loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is interaction of the unmanned aircraft pilot with the display of detect-and-avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive (LVC) concepts. The LVC components form the core infrastructure supporting simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging the use of existing assets distributed across multiple NASA Centers. Using standard LVC concepts enables future integration with existing simulation infrastructure.
NASA Astrophysics Data System (ADS)
Buck, J. J. H.; Phillips, A.; Lorenzo, A.; Kokkinaki, A.; Hearn, M.; Gardner, T.; Thorne, K.
2017-12-01
The National Oceanography Centre (NOC) operates a fleet of approximately 36 autonomous marine platforms including submarine gliders, autonomous underwater vehicles, and autonomous surface vehicles. Each platform effectively has the capability to observe the ocean and collect data akin to a small research vessel. This is creating growth in data volumes and complexity while the amount of resource available to manage data remains static. The OceanIds Command and Control (C2) project aims to solve these issues by fully automating data archival, processing and dissemination. The data architecture being implemented jointly by NOC and the Scottish Association for Marine Science (SAMS) includes a single Application Programming Interface (API) gateway to handle authentication, forwarding and delivery of both metadata and data. Technicians and principal investigators will enter expedition data prior to deployment of vehicles, enabling automated data processing when vehicles are deployed. The system will support automated metadata acquisition from platforms as this technology moves towards operational implementation. The metadata exposure to the web builds on a prototype developed by the European Commission-supported SenseOCEAN project and uses open standards, including World Wide Web Consortium (W3C) RDF/XML, the Semantic Sensor Network ontology, and the Open Geospatial Consortium (OGC) SensorML standard. Data will be delivered in the marine domain Everyone's Glider Observatory (EGO) format and in OGC Observations and Measurements. Additional formats will be served through endpoints such as the NOAA ERDDAP tool. This standardised data delivery via the API gateway enables timely near-real-time data to be served to Oceanids users, BODC users, operational users and big data systems. The use of open standards will also enable web interfaces to be built rapidly on the API gateway and data to be delivered to European research infrastructures that include aligned reference models for data infrastructure.
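As an illustration of the kind of standardised access such endpoints provide, the sketch below requests CSV data from an ERDDAP tabledap service; the server address and dataset identifier are hypothetical placeholders, not the Oceanids deployment:

```python
# Illustrative sketch: pulling near-real-time glider data from an ERDDAP tabledap
# endpoint as CSV. Server name and dataset ID are hypothetical placeholders.
import requests

server = "https://example.org/erddap"                     # hypothetical ERDDAP instance
dataset = "glider_mission_001"                            # hypothetical dataset ID
query = "time,latitude,longitude,temperature&time>=2017-06-01T00:00:00Z"

resp = requests.get(f"{server}/tabledap/{dataset}.csv?{query}", timeout=30)
resp.raise_for_status()
print(resp.text.splitlines()[:3])                         # header rows plus first data row
```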
Finding the ’RITE’ Acquisition Environment for Navy C2 Software
2015-05-01
Boilerplate contract language with Government-purpose rights; expectations of quality added to contracting language; template SOWs created. Tools listed include MCCABE IQ (static analysis: cyclomatic complexity and KSLOC, all languages), HP Fortify (security scan: STIG and vulnerabilities), GSSAT (GOTS) (security scan: STIG and vulnerabilities), AutoIT (automated test scripting engine for functional test automation), and TestComplete (automated ...).
Habash, Marc; Johns, Robert
2009-10-01
This study compared an automated Escherichia coli and coliform detection system with the membrane filtration direct count technique for water testing. The automated instrument performed as well as or better than the membrane filtration test in analyzing E. coli-spiked samples and blind samples with interference from Proteus vulgaris or Aeromonas hydrophila.
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
NASA Astrophysics Data System (ADS)
Murray-Krezan, Jeremy; Howard, Samantha; Sabol, Chris; Kim, Richard; Echeverry, Juan
2016-05-01
The Joint Space Operations Center (JSpOC) Mission System (JMS) is a service-oriented architecture (SOA) infrastructure with increased process automation and improved tools to enhance Space Situational Awareness (SSA) performed at the US-led JSpOC. The Advanced Research, Collaboration, and Application Development Environment (ARCADE) is a test-bed maintained and operated by the Air Force to (1) serve as a centralized test-bed for all research and development activities related to JMS applications, including algorithm development, data source exposure, service orchestration, and software services, and provide developers reciprocal access to relevant tools and data to accelerate technology development, (2) allow the JMS program to communicate user capability priorities and requirements to developers, (3) provide the JMS program with access to state-of-the-art research, development, and computing capabilities, and (4) support JMS Program Office-led market research efforts by identifying outstanding performers that are available to shepherd into the formal transition process. In this paper we share with the international remote sensing community some of the recent JMS and ARCADE developments that may contribute to greater SSA at the JSpOC in the future, and highlight technical areas where contributions are still greatly needed.
Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.
2014-01-01
We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933
SONG-China Project: A Global Automated Observation Network
NASA Astrophysics Data System (ADS)
Yang, Z. Z.; Lu, X. M.; Tian, J. F.; Zhuang, C. G.; Wang, K.; Deng, L. C.
2017-09-01
Driven by advancements in technology and by scientific objectives, data acquisition in observational astronomy has changed greatly in recent years. Fully automated or even autonomous ground-based networks of telescopes have become a trend for time-domain observational projects. The Stellar Observations Network Group (SONG) is an international collaboration with the participation and contribution of the Chinese astronomy community. The scientific goal of SONG is time-domain astrophysics such as asteroseismology and open cluster research. The SONG project aims to build a global network of 1 m telescopes equipped with high-precision and high-resolution spectrographs and two-channel lucky-imaging cameras. The Chinese initiative is to install a 50 cm binocular photometry telescope at each SONG node, sharing the network platform and infrastructure. This work focuses on the design and implementation, in technology and methodology, of SONG/50BiN, a typical ground-based network composed of multiple sites and a variety of instruments.
Optimal Control of Connected and Automated Vehicles at Roundabouts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Liuhui; Malikopoulos, Andreas; Rios-Torres, Jackeline
Connectivity and automation in vehicles provide an intriguing opportunity for enabling users to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles that are wirelessly connected to each other and to the infrastructure in roundabouts to achieve smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allows optimal coordination of vehicles for merging in such traffic scenarios. The effectiveness of the proposed approach is validated through simulation, and it is shown that coordination of vehicles can reduce total travel time by 3-49% and fuel consumption by 2-27% across different traffic levels. In addition, network throughput is improved by up to 25% due to elimination of stop-and-go driving behavior.
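The abstract does not state the exact optimization problem; the following is a common minimum-control-effort formulation for coordinating a merging vehicle, shown only to indicate the kind of analytical problem involved. The symbols p_i, v_i, u_i and the merging time t_i^m are assumed notation, not the paper's:

```latex
% Assumed, typical formulation (not necessarily the paper's exact model): each vehicle i
% minimizes acceleration effort subject to double-integrator dynamics, speed and
% acceleration limits, and a conflict-free arrival time t_i^m at the merging zone.
\begin{aligned}
\min_{u_i(t)} \quad & \tfrac{1}{2}\int_{t_i^0}^{t_i^m} u_i(t)^2 \, dt \\
\text{s.t.} \quad & \dot p_i = v_i, \qquad \dot v_i = u_i, \\
& v_{\min} \le v_i(t) \le v_{\max}, \qquad u_{\min} \le u_i(t) \le u_{\max}, \\
& p_i(t_i^m) = L \ \ \text{(entry of the merging zone), with } t_i^m \text{ chosen to avoid conflicts.}
\end{aligned}
```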
NASA Technical Reports Server (NTRS)
Milligan, James R.; Dutton, James E.
1993-01-01
In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.
NASA Astrophysics Data System (ADS)
Jenness, Tim; Currie, Malcolm J.; Tilanus, Remo P. J.; Cavanagh, Brad; Berry, David S.; Leech, Jamie; Rizzi, Luca
2015-10-01
With the advent of modern multidetector heterodyne instruments that can result in observations generating thousands of spectra per minute it is no longer feasible to reduce these data as individual spectra. We describe the automated data reduction procedure used to generate baselined data cubes from heterodyne data obtained at the James Clerk Maxwell Telescope (JCMT). The system can automatically detect baseline regions in spectra and automatically determine regridding parameters, all without input from a user. Additionally, it can detect and remove spectra suffering from transient interference effects or anomalous baselines. The pipeline is written as a set of recipes using the ORAC-DR pipeline environment with the algorithmic code using Starlink software packages and infrastructure. The algorithms presented here can be applied to other heterodyne array instruments and have been applied to data from historical JCMT heterodyne instrumentation.
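As a simplified illustration of automatic baseline handling (not the ORAC-DR recipe itself), the sketch below detects line-free channels of a single synthetic spectrum by iterative sigma clipping and subtracts a fitted polynomial baseline:

```python
# Illustrative sketch (not the ORAC-DR recipe): detect line-free channels of a single
# spectrum by sigma clipping, fit a low-order polynomial baseline to them, and subtract it.
import numpy as np

rng = np.random.default_rng(1)
chan = np.arange(1024)
spectrum = 0.002 * chan + rng.normal(0, 0.05, chan.size)      # sloped baseline plus noise
spectrum[480:520] += 1.0                                       # a synthetic emission line

mask = np.ones_like(spectrum, dtype=bool)                      # start with all channels
for _ in range(5):                                             # iterative sigma clipping
    coeffs = np.polyfit(chan[mask], spectrum[mask], deg=1)
    resid = spectrum - np.polyval(coeffs, chan)
    mask = np.abs(resid) < 3 * resid[mask].std()

baselined = spectrum - np.polyval(np.polyfit(chan[mask], spectrum[mask], 1), chan)
print(f"{mask.sum()} of {chan.size} channels flagged as baseline")
```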
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
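SciLuigi extends the Luigi workflow system; as a rough illustration of the underlying idea, the sketch below chains two dependent modelling steps and sweeps a tuning parameter using plain Luigi. The task and file names are invented, and this is not the SciLuigi API:

```python
# Illustrative plain-Luigi sketch of a two-step modelling workflow; the SciLuigi
# extension described in the abstract adds flow-based wiring on top of ideas like this.
import luigi

class PrepareData(luigi.Task):
    def output(self):
        return luigi.LocalTarget("data/prepared.csv")
    def run(self):
        with self.output().open("w") as f:
            f.write("feature,target\n1,0\n2,1\n")

class TrainModel(luigi.Task):
    c = luigi.FloatParameter(default=1.0)          # a tuning parameter swept by the workflow
    def requires(self):
        return PrepareData()
    def output(self):
        return luigi.LocalTarget(f"models/model_c{self.c}.txt")
    def run(self):
        with self.input().open() as f, self.output().open("w") as out:
            out.write(f"trained with C={self.c} on {len(f.readlines()) - 1} rows\n")

if __name__ == "__main__":
    # build the dependency graph and run it locally for several parameter values
    luigi.build([TrainModel(c=c) for c in (0.1, 1.0, 10.0)], local_scheduler=True)
```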
An overview of suite for automated global electronic biosurveillance (SAGES)
NASA Astrophysics Data System (ADS)
Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.
2012-06-01
Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations.
Video Guidance Sensor for Surface Mobility Operations
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Fischer, Richard; Bryan, Thomas; Howell, Joe; Howard, Ricky; Peters, Bruce
2008-01-01
Robotic systems and surface mobility will play an increased role in future exploration missions. Unlike the LRV of the Apollo era, which was an astronaut-piloted vehicle, future systems will include teleoperated and semi-autonomous operations. The tasks given to these vehicles will range from infrastructure maintenance and ISRU to construction, to name a few. A common task that may be performed would be the retrieval and deployment of trailer-mounted equipment. Operational scenarios may require these operations to be performed remotely via a teleoperated mode, or semi-autonomously. This presentation describes the on-going project to adapt the Automated Rendezvous and Capture (AR&C) sensor developed at the Marshall Space Flight Center for use in an automated trailer pick-up and deployment operation. The sensor, which has been successfully demonstrated on-orbit, has been mounted on an iRobot/John Deere RGATOR autonomous vehicle for this demonstration, which will be completed in the March 2008 time-frame.
Armbruster, David A; Overcash, David R; Reyes, Jaime
2014-01-01
The era of automation arrived with the introduction of the AutoAnalyzer using continuous flow analysis and the Robot Chemist that automated the traditional manual analytical steps. Successive generations of stand-alone analysers increased analytical speed, offered the ability to test high volumes of patient specimens, and provided large assay menus. A dichotomy developed, with a group of analysers devoted to performing routine clinical chemistry tests and another group dedicated to performing immunoassays using a variety of methodologies. Development of integrated systems greatly improved the analytical phase of clinical laboratory testing and further automation was developed for pre-analytical procedures, such as sample identification, sorting, and centrifugation, and post-analytical procedures, such as specimen storage and archiving. All phases of testing were ultimately combined in total laboratory automation (TLA) through which all modules involved are physically linked by some kind of track system, moving samples through the process from beginning to end. A newer and very powerful analytical methodology is liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS). LC-MS/MS has been automated but a future automation challenge will be to incorporate LC-MS/MS into TLA configurations. Another important facet of automation is informatics, including middleware, which interfaces the analyser software to a laboratory information system (LIS) and/or hospital information system (HIS). This software includes control of the overall operation of a TLA configuration and combines analytical results with patient demographic information to provide additional clinically useful information. This review describes automation relevant to clinical chemistry, but it must be recognised that automation applies to other specialties in the laboratory, e.g. haematology, urinalysis, microbiology. It is a given that automation will continue to evolve in the clinical laboratory, limited only by the imagination and ingenuity of laboratory scientists. PMID:25336760
Implementation of Testing Equipment for Asphalt Materials : Tech Summary
DOT National Transportation Integrated Search
2009-05-01
Three new automated methods for related asphalt material and mixture testing were evaluated under this study. Each of these devices is designed to reduce testing time considerably and reduce operator error by automating the testing process. The Thery...
DAME: planetary-prototype drilling automation.
Glass, B; Cannon, H; Branson, M; Hanagud, S; Paulsen, G
2008-06-01
We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.
Process development for automated solar cell and module production. Task 4: Automated array assembly
NASA Technical Reports Server (NTRS)
Hagerty, J. J.
1981-01-01
Progress in the development of automated solar cell and module production is reported. The Unimate robot is programmed for the final 35-cell pattern to be used in the fabrication of the deliverable modules. The mechanical construction phases of the automated lamination station and the final assembly station are completed, and the first operational testing is underway. The final controlling program is written and optimized. The glass reinforced concrete (GRC) panels to be used for testing and deliverables are in production. Test routines are grouped together and defined to produce the final control program.
Hydrogen Infrastructure Testing and Research Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2017-04-10
Learn about the Hydrogen Infrastructure Testing and Research Facility (HITRF), where NREL researchers are working on vehicle and hydrogen infrastructure projects that aim to enable more rapid inclusion of fuel cell and hydrogen technologies in the market to meet consumer and national goals for emissions reduction, performance, and energy security. As part of NREL’s Energy Systems Integration Facility (ESIF), the HITRF is designed for collaboration with a wide range of hydrogen, fuel cell, and transportation stakeholders.
Performance of the Xpert HIV-1 Viral Load Assay: a Systematic Review and Meta-analysis.
Nash, Madlen; Huddart, Sophie; Badar, Sayema; Baliga, Shrikala; Saravu, Kavitha; Pai, Madhukar
2018-04-01
Viral load (VL) is the preferred treatment-monitoring approach for HIV-positive patients. However, more rapid, near-patient, and low-complexity assays are needed to scale up VL testing. The Xpert HIV-1 VL assay (Cepheid, Sunnyvale, CA) is a new, automated molecular test, and it can leverage the GeneXpert systems that are being used widely for tuberculosis diagnosis. We systematically reviewed the evidence on the performance of this new tool in comparison to established reference standards. A total of 12 articles (13 studies) in which HIV patient VLs were compared between Xpert HIV VL assay and a reference standard VL assay were identified. Study quality was generally high, but substantial variability was observed in the number and type of agreement measures reported. Correlation coefficients between Xpert and reference assays were high, with a pooled Pearson correlation ( n = 8) of 0.94 (95% confidence interval [CI], 0.89, 0.97) and Spearman correlation ( n = 3) of 0.96 (95% CI, 0.86, 0.99). Bland-Altman metrics ( n = 11) all were within 0.35 log copies/ml of perfect agreement. Overall, Xpert HIV-1 VL performed well compared to current reference tests. The minimal training and infrastructure requirements for the Xpert HIV-1 VL assay make it attractive for use in resource-constrained settings, where point-of-care VL testing is most needed. Copyright © 2018 Nash et al.
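As a small illustration of the Bland-Altman agreement metric cited above, the sketch below computes bias and limits of agreement for hypothetical paired log10 viral loads; the values are invented, not data from the reviewed studies:

```python
# Illustrative sketch of the Bland-Altman agreement metric, on hypothetical paired
# log10 viral loads (Xpert vs. a reference assay), not the studies' data.
import numpy as np

xpert = np.array([2.1, 3.4, 4.0, 4.8, 5.5, 6.1])
reference = np.array([2.0, 3.5, 4.2, 4.7, 5.4, 6.3])

diff = xpert - reference
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                 # 95% limits of agreement around the bias
print(f"bias = {bias:+.2f} log copies/ml, "
      f"limits of agreement = ({bias - loa:.2f}, {bias + loa:.2f})")
```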
46 CFR 61.40-3 - Design verification testing.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-3 Design verification testing. (a) Tests must verify that automated vital systems are designed, constructed, and operate in...
Test/score/report: Simulation techniques for automating the test process
NASA Technical Reports Server (NTRS)
Hageman, Barbara H.; Sigman, Clayton B.; Koslosky, John T.
1994-01-01
A Test/Score/Report capability is currently being developed for the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) system which will automate testing of the Goddard Space Flight Center (GSFC) Payload Operations Control Center (POCC) and Mission Operations Center (MOC) software in three areas: telemetry decommutation, spacecraft command processing, and spacecraft memory load and dump processing. Automated computer control of the acceptance test process is one of the primary goals of a test team. With the proper simulation tools and user interface, acceptance testing, regression testing, and repetition of specific test procedures of a ground data system become simpler tasks. Ideally, the goal for complete automation would be to plug the operational deliverable into the simulator, press the start button, execute the test procedure, accumulate and analyze the data, score the results, and report the results to the test team along with a go/no-go recommendation. In practice, this may not be possible because of inadequate test tools, pressures of schedules, limited resources, etc. Most tests are accomplished using a certain degree of automation and test procedures that are labor intensive. This paper discusses some simulation techniques that can improve the automation of the test process. The TASS system tests the POCC/MOC software and provides a score based on the test results. The TASS system displays statistics on the success of the POCC/MOC system processing in each of the three areas, as well as event messages pertaining to the Test/Score/Report processing. The TASS system also provides formatted reports documenting each step performed during the tests and the results of each step. A prototype of the Test/Score/Report capability is available and currently being used to test some POCC/MOC software deliveries. When this capability is fully operational, it should greatly reduce the time necessary to test a POCC/MOC software delivery, as well as improve the quality of the test process.
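A toy illustration of the test/score/report idea (not TASS itself): compare observed values against expected values with tolerances, compute a score, and print a report with a go/no-go recommendation. All names and values are hypothetical:

```python
# Illustrative sketch only (not TASS): score a test by comparing expected and observed
# telemetry values, then emit a pass/fail report with an overall go/no-go recommendation.
expected = {"BATT_V": 28.0, "TEMP_1": 21.5, "MODE": 3}
observed = {"BATT_V": 27.9, "TEMP_1": 25.0, "MODE": 3}
tolerance = {"BATT_V": 0.5, "TEMP_1": 1.0, "MODE": 0}

results = {k: abs(observed[k] - expected[k]) <= tolerance[k] for k in expected}
score = sum(results.values()) / len(results)

for k, ok in results.items():
    print(f"{k:8s} expected={expected[k]} observed={observed[k]} -> {'PASS' if ok else 'FAIL'}")
print(f"score = {score:.0%}, recommendation: {'GO' if score == 1.0 else 'NO-GO'}")
```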
The Zadko Telescope: Exploring the Transient Universe
NASA Astrophysics Data System (ADS)
Coward, D. M.; Gendre, B.; Tanga, P.; Turpin, D.; Zadko, J.; Dodson, R.; Devogéle, M.; Howell, E. J.; Kennewell, J. A.; Boër, M.; Klotz, A.; Dornic, D.; Moore, J. A.; Heary, A.
2017-01-01
The Zadko telescope is a 1 m f/4 Cassegrain telescope, situated in the state of Western Australia about 80 km north of Perth. The facility plays a niche role in Australian astronomy, as it is the only meter-class facility in Australia dedicated to automated follow-up imaging of alerts or triggers received from different external instruments/detectors spanning the entire electromagnetic spectrum. Furthermore, the location of the facility at a longitude not covered by other meter-class facilities provides an important resource for time-critical projects. This paper reviews the status of the Zadko facility and science projects since it began robotic operations in March 2010. First, we report on major upgrades to the infrastructure and equipment (2012-2014) that have resulted in significantly improved robotic operations. Second, we review the core science projects, which include automated rapid follow-up of gamma ray burst (GRB) optical afterglows, imaging of neutrino counterpart candidates from the ANTARES neutrino observatory, photometry of rare (Barbarian) asteroids, and supernova searches in nearby galaxies. Finally, we discuss participation in newly commencing international projects, including the optical follow-up of gravitational wave (GW) candidates from the United States and European GW observatory network, and present first tests for very low latency follow-up of fast radio bursts. In the context of these projects, we outline plans for a future upgrade that will optimise the facility for alert-triggered imaging from the radio, optical, high-energy, neutrino, and GW bands.
Communicating Earth Observation (EO)-based landslide mapping capabilities to practitioners
NASA Astrophysics Data System (ADS)
Albrecht, Florian; Hölbling, Daniel; Eisank, Clemens; Weinke, Elisabeth; Vecchiotti, Filippo; Kociu, Arben
2016-04-01
Current remote sensing methods and the available Earth Observation (EO) data for landslide mapping can already support practitioners in their processes for gathering and using landslide information. Information derived from EO data can support emergency services and authorities in rapid mapping after landslide-triggering events and in landslide monitoring, and can serve as a relevant basis for hazard and risk mapping. These applications also concern owners, maintainers and insurers of infrastructure. Most often, practitioners have only a rough overview of the potential and limits of EO-based methods for landslide mapping. However, semi-automated image analysis techniques are still rarely used in practice. This limits the opportunity for user feedback, which would contribute to improving the methods so that they deliver fully adequate results in terms of accuracy, applicability and reliability. Moreover, practitioners lack information on the best way of integrating the methods into their daily processes. Practitioners require easy-to-grasp interfaces for testing new methods, which in turn would provide researchers with valuable user feedback. We introduce ongoing work towards an innovative web service that will allow for fast and efficient provision of EO-based landslide information products and that supports online processing. We investigate the applicability of various very high resolution (VHR), e.g. WorldView-2/3, Pleiades, and high resolution (HR), e.g. Landsat, Sentinel-2, optical EO data for semi-automated mapping based on object-based image analysis (OBIA). The methods, i.e. knowledge-based and statistical OBIA routines, are evaluated regarding their suitability for inclusion in a web service that is easy to use with the least amount of necessary training. The pre-operational web service will be implemented for selected study areas in the Alps (Austria, Italy), where weather-induced landslides have happened in the past. We will test the usability of the service together with potential users from the Geological Survey of Austria (GBA), various geological services of provinces of Austria, Germany and Italy, the Austrian Service for Torrent and Avalanche Control (WLV), the Austrian Federal Forestry Office (ÖBf), the Austrian Mountaineering Club (ÖAV) and infrastructure owners like the Austrian Road Maintenance Agency (ASFINAG). The results will show how EO-based landslide information products can be made accessible to responsible authorities in an innovative and easy manner and how new analysis methods can be promoted among a broad audience. Thus, the communication and knowledge exchange between researchers, the public, stakeholders and practitioners can be improved.
Exploring the Use of a Test Automation Framework
NASA Technical Reports Server (NTRS)
Cervantes, Alex
2009-01-01
It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the time period allotted to them. When problems in the implementation phase of a development project occur, it normally causes the software delivery date to slide. As a result, testers either need to work longer hours, or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.
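As a small illustration of what a test automation framework can provide, the sketch below pairs a reusable helper with a parameterized pytest test; all names are hypothetical and this is not the framework discussed in the paper:

```python
# Illustrative sketch: a tiny reusable helper in a test-automation framework plus one
# automated test that uses it (pytest-style); names are hypothetical.
import pytest

def run_and_capture(func, *args):
    """Framework helper: run a step, capture its result and any exception for reporting."""
    try:
        return {"ok": True, "value": func(*args)}
    except Exception as exc:                     # report failures instead of aborting the suite
        return {"ok": False, "error": repr(exc)}

def parse_version(text):                         # example unit under test (hypothetical)
    major, minor = text.split(".")
    return int(major), int(minor)

@pytest.mark.parametrize("text,expected", [("1.2", (1, 2)), ("10.0", (10, 0))])
def test_parse_version(text, expected):
    result = run_and_capture(parse_version, text)
    assert result["ok"] and result["value"] == expected
```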
D'Arcy, Shona; Rapcan, Viliam; Gali, Alessandra; Burke, Nicola; O'Connell, Gloria Crispino; Robertson, Ian H; Reilly, Richard B
2013-01-01
Cognitive assessments are valuable tools in assessing neurological conditions. They are critical in measuring deficits in cognitive function in an array of neurological disorders and during the ageing process. Automation of cognitive assessments is one way to address the increasing burden on medical resources for an ever increasing ageing population. This study investigated the suitability of using automated Interactive Voice Response (IVR) technology to deliver a suite of cognitive assessments to older adults using speech as the input modality. Several clinically valid and gold-standard cognitive assessments were selected for implementation in the IVR application. The IVR application was designed using human-centred design principles to ensure the experience was as user friendly as possible. Sixty-one participants completed two IVR assessments and one face-to-face (FF) assessment with a neuropsychologist. Completion rates for individual tests were inspected to identify those tests that are most suitable for administration via IVR technology. Interclass correlations were calculated to assess the reliability of the automated administration of the cognitive assessments across delivery modes. While all participants successfully completed all automated assessments, variability in the completion rates for different cognitive tests was observed. Statistical analysis found significant interclass correlations for certain cognitive tests between the different modes of administration. Analysis also suggests that an initial FF assessment reduces the variability in cognitive test scores when introducing automation into such an assessment. This study has demonstrated the functional and cognitive reliability of administering specific cognitive tests using an automated, speech-driven application. It has defined the characteristics of existing cognitive tests that are suitable for such an automated delivery system and also informs on the limitations of other cognitive tests for this modality. This study presents recommendations for developing future large-scale cognitive assessments.
Measuring infrastructure: A key step in program evaluation and planning
Schmitt, Carol L.; Glasgow, LaShawn; Lavinghouze, S. Rene; Rieker, Patricia P.; Fulmer, Erika; McAleer, Kelly; Rogers, Todd
2016-01-01
State tobacco prevention and control programs (TCPs) require a fully functioning infrastructure to respond effectively to the Surgeon General’s call for accelerating the national reduction in tobacco use. The literature describes common elements of infrastructure; however, a lack of valid and reliable measures has made it difficult for program planners to monitor relevant infrastructure indicators and address observed deficiencies, or for evaluators to determine the association among infrastructure, program efforts, and program outcomes. The Component Model of Infrastructure (CMI) is a comprehensive, evidence-based framework that facilitates TCP program planning efforts to develop and maintain their infrastructure. Measures of CMI components were needed to evaluate the model’s utility and predictive capability for assessing infrastructure. This paper describes the development of CMI measures and results of a pilot test with nine state TCP managers. Pilot test findings indicate that the tool has good face validity and is clear and easy to follow. The CMI tool yields data that can enhance public health efforts in a funding-constrained environment and provides insight into program sustainability. Ultimately, the CMI measurement tool could facilitate better evaluation and program planning across public health programs. PMID:27037655
A Possible Approach for Addressing Neglected Human Factors Issues of Systems Engineering
NASA Technical Reports Server (NTRS)
Johnson, Christopher W.; Holloway, C. Michael
2011-01-01
The increasing complexity of safety-critical applications has led to the introduction of decision support tools in the transportation and process industries. Automation has also been introduced to support operator intervention in safety-critical applications. These innovations help reduce overall operator workload, and filter application data to maximize the finite cognitive and perceptual resources of system operators. However, these benefits do not come without a cost. Increased computational support for the end-users of safety-critical applications leads to increased reliance on engineers to monitor and maintain automated systems and decision support tools. This paper argues that by focussing on the end-users of complex applications, previous research has tended to neglect the demands that are being placed on systems engineers. The argument is illustrated through discussing three recent accidents. The paper concludes by presenting a possible strategy for building and using highly automated systems based on increased attention by management and regulators, improvements in competency and training for technical staff, sustained support for engineering team resource management, and the development of incident reporting systems for infrastructure failures. This paper represents preliminary work, about which we seek comments and suggestions.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are particularly affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating the problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automated multi-agent control of the systems in parallel mode with various degrees of detail.
An ethernet/IP security review with intrusion detection applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laughter, S. A.; Williams, R. D.
2006-07-01
Supervisory Control and Data Acquisition (SCADA) and automation networks, used throughout utility and manufacturing applications, have their own specific set of operational and security requirements when compared to corporate networks. The modern climate of heightened national security and awareness of terrorist threats has made the security of these systems of prime concern. There is a need to understand the vulnerabilities of these systems and how to monitor and protect them. Ethernet/IP is a member of a family of protocols based on the Control and Information Protocol (CIP). Ethernet/IP allows automation systems to be utilized on and integrated with traditional TCP/IP networks, facilitating integration of these networks with corporate systems and even the Internet. A review of the CIP protocol and the additions Ethernet/IP makes to it has been done to reveal the kind of attacks made possible through the protocol. A set of rules for the SNORT Intrusion Detection software is developed based on the results of the security review. These can be used to monitor, and possibly actively protect, a SCADA or automation network that utilizes Ethernet/IP in its infrastructure.
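As a loose illustration of the kind of rule set such a review motivates (not the rules developed in the paper), the sketch below emits a basic Snort rule flagging EtherNet/IP session traffic that enters a control-system segment from outside it. The TCP port 44818 commonly associated with EtherNet/IP explicit messaging and the network ranges are assumptions for illustration.

```python
# Sketch: generate a simple Snort rule to watch EtherNet/IP traffic entering a
# SCADA segment. Port 44818 and the CIDR range are illustrative assumptions,
# not values taken from the paper.
SCADA_NET = "10.10.0.0/16"     # hypothetical control-system subnet
ENIP_TCP_PORT = 44818

def enip_alert_rule(sid: int, msg: str) -> str:
    return (
        f'alert tcp !{SCADA_NET} any -> {SCADA_NET} {ENIP_TCP_PORT} '
        f'(msg:"{msg}"; flow:to_server,established; sid:{sid}; rev:1;)'
    )

if __name__ == "__main__":
    # Write the rule to a file that a Snort instance could include.
    with open("enip.rules", "w") as fh:
        fh.write(enip_alert_rule(1000001, "EtherNet/IP session from outside SCADA net") + "\n")
```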
Toward an automated parallel computing environment for geosciences
NASA Astrophysics Data System (ADS)
Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping
2007-08-01
Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
Skyalert: a Platform for Event Understanding and Dissemination
NASA Astrophysics Data System (ADS)
Williams, Roy; Drake, A. J.; Djorgovski, S. G.; Donalek, C.; Graham, M. J.; Mahabal, A.
2010-01-01
Skyalert.org is an event repository, web interface, and event-oriented workflow architecture that can be used in many different ways for handling astronomical events that are encoded as VOEvent. It can be used as a remote application (events in the cloud) or installed locally. Some applications are: Dissemination of events with sophisticated discrimination (trigger), using email, instant message, RSS, twitter, etc; Authoring interface for survey-generated events, follow-up observations, and other event types; event streams can be put into the skyalert.org repository, either public or private, or into a local installation of Skyalert; Event-driven software components to fetch archival data, for data-mining and classification of events; human interface to events through wiki, comments, and circulars; use of the "notices and circulars" model, where machines make the notices in real time and people write the interpretation later; Building trusted, automated decisions for automated follow-up observation, and the information infrastructure for automated follow-up with DC3 and HTN telescope schedulers; Citizen science projects such as artifact detection and classification; Query capability for past events, including correlations between different streams and correlations with existing source catalogs; Event metadata structures and connection to the global registry of the virtual observatory.
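A rough sketch of the "trigger" idea, i.e. filtering incoming VOEvent packets before dissemination, is given below. The packet structure assumed here (a What block containing Param elements with name/value attributes) is a simplified reading of VOEvent XML, and the magnitude threshold is arbitrary; this is not Skyalert's implementation.

```python
# Sketch of a dissemination trigger that filters VOEvent packets by a magnitude
# parameter. The XML layout and the threshold are simplified assumptions.
import xml.etree.ElementTree as ET

def magnitude(voevent_xml: str):
    root = ET.fromstring(voevent_xml)
    for param in root.iter("Param"):              # namespaces ignored for brevity
        if param.get("name", "").lower() == "mag":
            return float(param.get("value"))
    return None

def should_alert(voevent_xml: str, mag_limit: float = 18.0) -> bool:
    mag = magnitude(voevent_xml)
    return mag is not None and mag <= mag_limit   # brighter than the limit

example = '<VOEvent><What><Param name="mag" value="17.2"/></What></VOEvent>'
print(should_alert(example))   # True -> would be passed on via email/RSS/twitter
```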
Thomas, Marianna S; Newman, David; Leinhard, Olof Dahlqvist; Kasmai, Bahman; Greenwood, Richard; Malcolm, Paul N; Karlsson, Anette; Rosander, Johannes; Borga, Magnus; Toms, Andoni P
2014-09-01
To measure the test-retest reproducibility of an automated system for quantifying whole body and compartmental muscle volumes using wide bore 3 T MRI. Thirty volunteers stratified by body mass index underwent whole body 3 T MRI, two-point Dixon sequences, on two separate occasions. Water-fat separation was performed, with automated segmentation of whole body, torso, upper and lower leg volumes, and manually segmented lower leg muscle volumes. Mean automated total body muscle volume was 19.32 L (SD 9.1) and 19.28 L (SD 9.12) for the first and second acquisitions (intraclass correlation coefficient (ICC) = 1.0, 95% limits of agreement -0.32 to 0.2 L). ICCs for all automated test-retest muscle volumes were almost perfect (0.99-1.0), with 95% limits of agreement within 1.8-6.6% of mean volume. Automated muscle volume measurements correlate closely with manual quantification (right lower leg: manual 1.68 L (2SD 0.6) compared to automated 1.64 L (2SD 0.6); left lower leg: manual 1.69 L (2SD 0.64) compared to automated 1.63 L (SD 0.61); correlation coefficients for automated and manual segmentation were 0.94-0.96). Fully automated whole body and compartmental muscle volume quantification can be achieved rapidly on a 3 T wide bore system with very low margins of error, excellent test-retest reliability and excellent correlation to manual segmentation in the lower leg. Sarcopaenia is an important reversible complication of a number of diseases. Manual quantification of muscle volume is time-consuming and expensive. Muscles can be imaged using in- and out-of-phase MRI. Automated atlas-based segmentation can identify muscle groups. Automated muscle volume segmentation is reproducible and can replace manual measurements.
Automation of testing modules of controller ELSY-ТМК
NASA Astrophysics Data System (ADS)
Dolotov, A. E.; Dolotova, R. G.; Petuhov, D. V.; Potapova, A. P.
2017-01-01
Modern means for the automation of various processes make it possible to maintain high quality standards for released products and to raise labour efficiency. This paper presents data on the automation of the test process of the ELSY-TMK controller [1]. The ELSY-TMK programmable logic controller is an effective modular platform for building automation systems for small and medium-scale industrial production. The modern, functional communication standard and open environment of the logic controller provide a powerful tool for a wide spectrum of industrial automation applications. The algorithm allows controller modules to be tested, by operating the switching system and external devices, faster and with higher quality than a human can achieve without such means.
Next generation terminology infrastructure to support interprofessional care planning.
Collins, Sarah; Klinkenberg-Ramirez, Stephanie; Tsivkin, Kira; Mar, Perry L; Iskhakova, Dina; Nandigam, Hari; Samal, Lipika; Rocha, Roberto A
2017-11-01
Develop a prototype of an interprofessional terminology and information model infrastructure that can enable care planning applications to facilitate patient-centered care, learn care plan linkages and associations, provide decision support, and enable automated, prospective analytics. The study followed a three-step approach: (1) process model and clinical scenario development, (2) requirements analysis, and (3) development and validation of information and terminology models. Components of the terminology model include: Health Concerns, Goals, Decisions, Interventions, Assessments, and Evaluations. A terminology infrastructure should: (A) Include discrete care plan concepts; (B) Include sets of profession-specific concerns, decisions, and interventions; (C) Communicate rationales, anticipatory guidance, and guidelines that inform decisions among the care team; (D) Define semantic linkages across clinical events and professions; (E) Define sets of shared patient goals and sub-goals, including patient-stated goals; (F) Capture evaluation toward achievement of goals. These requirements were mapped to the AHRQ Care Coordination Measures Framework. This study used a constrained set of clinician-validated clinical scenarios. Terminology models for goals and decisions are unavailable in SNOMED CT, limiting the ability to evaluate these aspects of the proposed infrastructure. Defining and linking subsets of care planning concepts appears to be feasible, but also essential to model interprofessional care planning for common co-occurring conditions and chronic diseases. We recommend the creation of goal dynamics and decision concepts in SNOMED CT to further enable the necessary models. Systems with flexible terminology management infrastructure may enable intelligent decision support to identify conflicting and aligned concerns, goals, decisions, and interventions in shared care plans, ultimately decreasing documentation effort and cognitive burden for clinicians and patients. Copyright © 2017 Elsevier Inc. All rights reserved.
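A minimal sketch of the kind of information model the abstract enumerates is given below, using Python dataclasses. The class and field names are illustrative assumptions keyed to requirements (C)-(F) above, not the authors' published model or SNOMED CT content.

```python
# Minimal sketch of an interprofessional care-plan information model. Names and
# fields are illustrative assumptions, not the authors' actual model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Concept:
    code: str                          # e.g., a SNOMED CT identifier, where one exists
    label: str
    profession: Optional[str] = None   # profession-specific subsets, requirement (C)

@dataclass
class Goal:
    concept: Concept
    patient_stated: bool = False       # shared and patient-stated goals, requirement (E)
    sub_goals: List["Goal"] = field(default_factory=list)

@dataclass
class CarePlanEntry:
    health_concern: Concept
    goals: List[Goal]
    decisions: List[Concept]
    interventions: List[Concept]
    evaluations: List[str] = field(default_factory=list)                 # requirement (F)
    linked_entries: List["CarePlanEntry"] = field(default_factory=list)  # semantic linkages, (D)
```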
46 CFR 61.40-3 - Design verification testing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Design verification testing. 61.40-3 Section 61.40-3... INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-3 Design verification testing. (a) Tests must verify that automated vital systems are designed, constructed, and operate in...
Variability of the QuantiFERON®-TB gold in-tube test using automated and manual methods.
Whitworth, William C; Goodwin, Donald J; Racster, Laura; West, Kevin B; Chuke, Stella O; Daniels, Laura J; Campbell, Brandon H; Bohanon, Jamaria; Jaffar, Atheer T; Drane, Wanzer; Sjoberg, Paul A; Mazurek, Gerald H
2014-01-01
The QuantiFERON®-TB Gold In-Tube test (QFT-GIT) detects Mycobacterium tuberculosis (Mtb) infection by measuring release of interferon gamma (IFN-γ) when T-cells (in heparinized whole blood) are stimulated with specific Mtb antigens. The amount of IFN-γ is determined by enzyme-linked immunosorbent assay (ELISA). Automation of the ELISA method may reduce variability. To assess the impact of ELISA automation, we compared QFT-GIT results and variability when ELISAs were performed manually and with automation. Blood was collected into two sets of QFT-GIT tubes and processed at the same time. For each set, IFN-γ was measured in automated and manual ELISAs. Variability in interpretations and IFN-γ measurements was assessed between automated (A1 vs. A2) and manual (M1 vs. M2) ELISAs. Variability in IFN-γ measurements was also assessed on separate groups stratified by the mean of the four ELISAs. Subjects (N = 146) had two automated and two manual ELISAs completed. Overall, interpretations were discordant for 16 (11%) subjects. Excluding one subject with indeterminate results, 7 (4.8%) subjects had discordant automated interpretations and 10 (6.9%) subjects had discordant manual interpretations (p = 0.17). Quantitative variability was not uniform; within-subject variability was greater with higher IFN-γ measurements and with manual ELISAs. For subjects with mean TB Responses ±0.25 IU/mL of the 0.35 IU/mL cutoff, the within-subject standard deviation for two manual tests was 0.27 (CI95 = 0.22-0.37) IU/mL vs. 0.09 (CI95 = 0.07-0.12) IU/mL for two automated tests. QFT-GIT ELISA automation may reduce variability near the test cutoff. Methodological differences should be considered when interpreting and using IFN-γ release assays (IGRAs).
Python Scripts for Automation of Current-Voltage Testing of Semiconductor Devices (FY17)
2017-01-01
ARL-TR-7923 ● JAN 2017 ● US Army Research Laboratory. Time spent on manual device-testing procedures is reduced or eliminated through automation. This technical report includes scripts written in Python, version 2.7, used to automate current-voltage testing of semiconductor devices; for example, an "Exit Program" routine (line 505 of the script) calls sys.exit() from the sys package that comes with Python to exit the program.
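The report's own scripts are only excerpted above; as a hedged sketch of what automated current-voltage testing typically involves, the fragment below steps a source-measure unit through a voltage sweep and records the measured current. The VISA resource address and the SCPI commands are assumptions about a generic Keithley-style instrument, not commands taken from the ARL report.

```python
# Hedged sketch of an automated I-V sweep. The instrument address and SCPI
# commands are generic assumptions, not the ARL scripts.
import csv
import numpy as np
import pyvisa

rm = pyvisa.ResourceManager()
smu = rm.open_resource("GPIB0::24::INSTR")      # hypothetical instrument address
smu.write("*RST")
smu.write(":SOUR:FUNC VOLT")                    # source voltage, measure current
smu.write(":SENS:FUNC 'CURR'")
smu.write(":OUTP ON")

with open("iv_sweep.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["voltage_V", "current_A"])
    for v in np.linspace(0.0, 1.0, 21):         # 0 to 1 V in 50 mV steps
        smu.write(f":SOUR:VOLT {v:.3f}")
        current = float(smu.query(":MEAS:CURR?"))
        writer.writerow([v, current])

smu.write(":OUTP OFF")
smu.close()
```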
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
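To make the "compose and control an arbitrarily sized compute cluster" idea concrete, here is a short, hedged sketch of programmatically requesting worker instances on EC2 with boto3. The AMI ID, instance type, and tag are placeholders; this is not CloudMan's actual implementation.

```python
# Rough sketch of starting cloud worker nodes for a cluster on EC2. The AMI ID,
# instance type, and tags are placeholders, not CloudMan internals.
import boto3

def launch_workers(count: int, ami_id: str = "ami-0123456789abcdef0",
                   instance_type: str = "m5.large"):
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "cluster-worker"}],
        }],
    )
    return [inst["InstanceId"] for inst in response["Instances"]]

# launch_workers(4) would request four worker instances for the cluster.
```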
Implementation of and experiences with new automation
Mahmud, Ifte; Kim, David
2000-01-01
In an environment where cost, timeliness, and quality drives the business, it is essential to look for answers in technology where these challenges can be met. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However the workload remained constant or in some cases actually increased. So even with reduction in laboratory personnel, we were challenged internally and from the headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies indicated the Suffern site to be the choice manufacturing site above other facilities. This is attributed to the Suffern facility employees' commitment to reduce cycle time, improve efficiency, and maintain high level of regulatory compliance. One of the stronger contributing factors was automation technology in the laboratories, and this technology will continue to help the site's status in the future. The Automation Group was originally formed about 2 years ago to meet the demands of high quality assurance testing throughput needs and to bring our testing group up to standard with the industry. Automation began with only two people in the group and now we have three people who are the next generation automation scientists. Even with such a small staff, we have made great strides in laboratory automation as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second generation automation group came from the laboratory and without much automation experience. However, with the involvement from the users at ‘get-go’, we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, and then Zymark TPWII followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore the possibilities of improving the testing methodologies so that the chemists will be less burdened with repetitive and mundane daily tasks and be more focused on bringing quality into our products. PMID:18924695
A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jack Dongarra; Shirley Moore; Bart Miller, Jeffrey Hollingsworth
2005-03-15
The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
An Architecture for SCADA Network Forensics
NASA Astrophysics Data System (ADS)
Kilpatrick, Tim; Gonzalez, Jesus; Chandia, Rodrigo; Papa, Mauricio; Shenoi, Sujeet
Supervisory control and data acquisition (SCADA) systems are widely used in industrial control and automation. Modern SCADA protocols often employ TCP/IP to transport sensor data and control signals. Meanwhile, corporate IT infrastructures are interconnecting with previously isolated SCADA networks. The use of TCP/IP as a carrier protocol and the interconnection of IT and SCADA networks raise serious security issues. This paper describes an architecture for SCADA network forensics. In addition to supporting forensic investigations of SCADA network incidents, the architecture incorporates mechanisms for monitoring process behavior, analyzing trends and optimizing plant performance.
2016-10-27
Domain C2, Adaptive Domain Control, Global Integrated ISR, Rapid Global Mobility, and Global Precision Strike, organized within a framework of... mission needs. (Among the dozen implications) A more transparent, networked infrastructure that integrates ubiquitous sensors, automated systems... Conclusion 5.1 Common Technical Trajectory One of the most significant opportunities for AFRL is to develop and mobilize the qualitative roadmap
2002-03-22
may be derived from detailed inspection of the IC itself or from illicit appropriation of design information. Counterfeit smart cards can be mass...Infrastructure (PKI) as the Internet to securely and privately exchange data and money through the use of a public and a private cryptographic key pair...interference devices (SQDIS), electrical testing, and electron beam testing. • Other attacks, such as UV or X-rays or high temperatures, could cause erasure
Requirements for Flight Testing Automated Terminal Service
DOT National Transportation Integrated Search
1977-05-01
This report describes requirements for the flight tests of the baseline Automated Terminal Service (ATS) system. The overall objective of the flight test program is to evaluate the feasibility of the ATS concept. Within this objective there are two ...
Zanatta, Lucia; Valori, Laura; Cappelletto, Eleonora; Pozzebon, Maria Elena; Pavan, Elisabetta; Dei Tos, Angelo Paolo; Merkle, Dennis
2015-02-01
In the modern molecular diagnostic laboratory, cost considerations are of paramount importance. Automation of complex molecular assays not only allows a laboratory to accommodate higher test volumes and throughput but also has a considerable impact on the cost of testing from the perspective of reagent costs, as well as hands-on time for skilled laboratory personnel. The following study tracked the cost of labor (hands-on time) and reagents for fluorescence in situ hybridization (FISH) testing in a routine, high-volume pathology and cytogenetics laboratory in Treviso, Italy, over a 2-y period (2011-2013). The laboratory automated FISH testing with the VP 2000 Processor, a deparaffinization, pretreatment, and special staining instrument produced by Abbott Molecular, and compared hands-on time and reagent costs to manual FISH testing. The results indicated significant cost and time saving when automating FISH with VP 2000 when more than six FISH tests were run per week. At 12 FISH assays per week, an approximate total cost reduction of 55% was observed. When running 46 FISH specimens per week, the cost saving increased to 89% versus manual testing. The results demonstrate that the VP 2000 processor can significantly reduce the cost of FISH testing in diagnostic laboratories. © 2014 Society for Laboratory Automation and Screening.
Automation of Space Station module power management and distribution system
NASA Technical Reports Server (NTRS)
Bechtel, Robert; Weeks, Dave; Walls, Bryan
1990-01-01
Viewgraphs on automation of space station module (SSM) power management and distribution (PMAD) system are presented. Topics covered include: reasons for power system automation; SSM/PMAD approach to automation; SSM/PMAD test bed; SSM/PMAD topology; functional partitioning; SSM/PMAD control; rack level autonomy; FRAMES AI system; and future technology needs for power system automation.
DOT National Transportation Integrated Search
2009-05-01
In 2005, the US Department of Transportation (DOT) initiated a program to develop and test a 5.9 GHz-based Vehicle Infrastructure Integration (VII) proof of concept (POC). The POC was implemented in the northwest suburbs of Detroit, Michigan. Th...
Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Medrano Llamas, Ramón; Legger, Federica; Sciabà, Andrea; Sciacca, Gianfranco; Úbeda García, Mario; van der Ster, Daniel
2012-12-01
Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).
Google glass based immunochromatographic diagnostic test analysis
NASA Astrophysics Data System (ADS)
Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan
2015-03-01
Integration of optical imagers and sensors into recently emerging wearable computational devices allows for simpler and more intuitive methods of integrating biomedical imaging and medical diagnostics tasks into existing infrastructures. Here we demonstrate the ability of one such device, the Google Glass, to perform qualitative and quantitative analysis of immunochromatographic rapid diagnostic tests (RDTs) using a voice-commandable hands-free software-only interface, as an alternative to larger and more bulky desktop or handheld units. Using the built-in camera of Glass to image one or more RDTs (labeled with Quick Response (QR) codes), our Glass software application uploads the captured image and related information (e.g., user name, GPS, etc.) to our servers for remote analysis and storage. After digital analysis of the RDT images, the results are transmitted back to the originating Glass device, and made available through a website in geospatial and tabular representations. We tested this system on qualitative human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) RDTs. For qualitative HIV tests, we demonstrate successful detection and labeling (i.e., yes/no decisions) for up to 6-fold dilution of HIV samples. For quantitative measurements, we activated and imaged PSA concentrations ranging from 0 to 200 ng/mL and generated calibration curves relating the RDT line intensity values to PSA concentration. By providing automated digitization of both qualitative and quantitative test results, this wearable colorimetric diagnostic test reader platform on Google Glass can reduce operator errors caused by poor training, provide real-time spatiotemporal mapping of test results, and assist with remote monitoring of various biomedical conditions.
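The server-side image analysis is described only at a high level; as a rough sketch (not the authors' pipeline), the fragment below estimates a test-line signal from a cropped RDT photo and converts it to a concentration through a hypothetical linear calibration. The crop coordinates and calibration coefficients are invented for illustration.

```python
# Sketch: quantify an RDT test line from a photo. Crop box and calibration are
# illustrative assumptions, not the published Google Glass pipeline.
import numpy as np
from PIL import Image

def line_intensity(image_path: str, box=(120, 40, 180, 90)) -> float:
    """Mean darkness of the test-line region (higher = stronger line)."""
    img = Image.open(image_path).convert("L")          # grayscale
    region = np.asarray(img.crop(box), dtype=float)
    background = np.asarray(img, dtype=float).mean()
    return max(background - region.mean(), 0.0)        # signal above background

def psa_ng_per_ml(intensity: float, slope=2.5, intercept=0.0) -> float:
    """Hypothetical calibration curve mapping line intensity to PSA concentration."""
    return slope * intensity + intercept

# conc = psa_ng_per_ml(line_intensity("rdt_photo.jpg"))
```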
46 CFR 130.480 - Test procedure and operations manual.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Test procedure and operations manual. 130.480 Section... VESSEL CONTROL, AND MISCELLANEOUS EQUIPMENT AND SYSTEMS Automation of Unattended Machinery Spaces § 130.480 Test procedure and operations manual. (a) A procedure for tests to be conducted on automated...
Test oracle automation for V&V of an autonomous spacecraft's planner
NASA Technical Reports Server (NTRS)
Feather, M. S.; Smith, B.
2001-01-01
We built automation to assist the software testing efforts associated with the Remote Agent experiment. In particular, our focus was upon introducing test oracles into the testing of the planning and scheduling system component. This summary is intended to provide an overview of the work.
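The summary does not spell out what a test oracle for a planner checks; as a minimal hedged sketch (not the Remote Agent oracle itself), the code below validates a generated plan against two generic properties: actions on the same resource must not overlap, and every action must finish before a deadline. The plan representation and the properties are assumptions for illustration.

```python
# Minimal sketch of a test oracle for planner output. The plan structure and the
# two checked properties are illustrative assumptions, not Remote Agent constraints.
from typing import List, NamedTuple

class Action(NamedTuple):
    name: str
    resource: str
    start: float
    end: float

def oracle(plan: List[Action], deadline: float) -> List[str]:
    """Return a list of violations; an empty list means the plan passes."""
    violations = []
    for a in plan:
        if a.end > deadline:
            violations.append(f"{a.name} finishes after the deadline")
    last_on_resource = {}
    for a in sorted(plan, key=lambda x: x.start):
        prev = last_on_resource.get(a.resource)
        if prev is not None and a.start < prev.end:
            violations.append(f"{a.name} overlaps {prev.name} on {a.resource}")
        last_on_resource[a.resource] = a
    return violations

plan = [Action("warm_up_camera", "camera", 0, 5),
        Action("take_image", "camera", 4, 6)]      # deliberately overlapping
print(oracle(plan, deadline=10.0))
```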
ERIC Educational Resources Information Center
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel
2015-01-01
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Exponential error reduction in pretransfusion testing with automation.
South, Susan F; Casina, Tony S; Li, Lily
2012-08-01
Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
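The abstract reports risk priority numbers without showing the arithmetic; the standard FMEA calculation is RPN = severity x occurrence x detection for each failure mode, compared or summed across process steps. The sketch below illustrates that calculation; the step names and 1-10 scores are invented, not the study's data.

```python
# FMEA sketch: risk priority number per process step. Step names and the
# severity/occurrence/detection scores are made up for illustration.
steps = [
    # (step, severity, occurrence, detection)
    ("label tube at bedside", 9, 4, 6),
    ("manual ABO tile interpretation", 9, 3, 7),
    ("transcribe result to LIS", 7, 3, 5),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

total = 0
for name, s, o, d in steps:
    value = rpn(s, o, d)
    total += value
    print(f"{name}: RPN = {value}")
print("process total RPN =", total)
```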
Automated spot defect characterization in a field portable night vision goggle test set
NASA Astrophysics Data System (ADS)
Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume
2018-05-01
This paper discusses a new capability developed for, and results from, a field portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a Knife Edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain, while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results from across several NVGs will be presented.
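As a loose illustration of the machine-vision step (not the algorithm implemented in the test set), the sketch below thresholds a grayscale image of the intensifier output, finds dark blobs, and reports each blob's size and distance from the image center so results could be binned against concentric-ring zones. The threshold value and minimum area are assumptions.

```python
# Sketch of automated spot-defect sizing with machine vision. Thresholds and the
# zone geometry are placeholders, not the test set's algorithm.
import cv2
import numpy as np

def find_spot_defects(image_path: str, min_area_px: int = 5):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Dark blemishes on a bright intensified image: invert, then threshold.
    _, mask = cv2.threshold(255 - img, 60, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    h, w = img.shape
    center = (w / 2.0, h / 2.0)
    defects = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area_px:
            continue
        (x, y), radius = cv2.minEnclosingCircle(c)
        dist = float(np.hypot(x - center[0], y - center[1]))
        defects.append({"x": x, "y": y, "diameter_px": 2 * radius,
                        "distance_from_center_px": dist})
    return defects   # rows for the automatically generated result table
```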
Lohmann, Amanda R; Carlson, Matthew L; Sladen, Douglas P
2018-03-01
Intraoperative cochlear implant device testing provides valuable information regarding device integrity, electrode position, and may assist with determining initial stimulation settings. Manual intraoperative device testing during cochlear implantation requires the time and expertise of a trained audiologist. The purpose of the current study is to investigate the feasibility of using automated remote intraoperative cochlear implant reverse telemetry testing as an alternative to standard testing. Prospective pilot study evaluating intraoperative remote automated impedance and Automatic Neural Response Telemetry (AutoNRT) testing in 34 consecutive cochlear implant surgeries using the Intraoperative Remote Assistant (Cochlear Nucleus CR120). In all cases, remote intraoperative device testing was performed by trained operating room staff. A comparison was made to the "gold standard" of manual testing by an experienced cochlear implant audiologist. Electrode position and absence of tip fold-over was confirmed using plain film x-ray. Automated remote reverse telemetry testing was successfully completed in all patients. Intraoperative x-ray demonstrated normal electrode position without tip fold-over. Average impedance values were significantly higher using standard testing versus CR120 remote testing (standard mean 10.7 kΩ, SD 1.2 vs. CR120 mean 7.5 kΩ, SD 0.7, p < 0.001). There was strong agreement between standard manual testing and remote automated testing with regard to the presence of open or short circuits along the array. There were, however, two cases in which standard testing identified an open circuit, when CR120 testing showed the circuit to be closed. Neural responses were successfully obtained in all patients using both systems. There was no difference in basal electrode responses (standard mean 195.0 μV, SD 14.10 vs. CR120 194.5 μV, SD 14.23; p = 0.7814); however, more favorable (lower μV amplitude) results were obtained with the remote automated system in the apical 10 electrodes (standard 185.4 μV, SD 11.69 vs. CR120 177.0 μV, SD 11.57; p value < 0.001). These preliminary data demonstrate that intraoperative cochlear implant device testing using a remote automated system is feasible. This system may be useful for cochlear implant programs with limited audiology support or for programs looking to streamline intraoperative device testing protocols. Future studies with larger patient enrollment are required to validate these promising, but preliminary, findings.
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
Upgrading Technology Infrastructure in California's Schools
ERIC Educational Resources Information Center
Gao, Niu; Murphy, Patrick
2016-01-01
As California schools move into online testing and online learning, an adequate technology infrastructure is no longer an option, but a necessity. To fully benefit from digital learning, schools will require a comprehensive technology infrastructure that can support a range of administrative and instructional tools. An earlier PPIC report found…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
..., Vehicle-to-Infrastructure, and Testing programs; along with a special session discussing lessons learned... evolving in terms of a robust Vehicle-to-Infrastructure environment, and identify what we have learned... wireless communication between vehicles, infrastructure, and personal communications devices to...
Automation of electromagnetic compatability (EMC) test facilities
NASA Technical Reports Server (NTRS)
Harrison, C. A.
1986-01-01
Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.
2016-10-01
Automation and control testing has been completed on a 5x5 array of bubble actuators to verify pressure... Mechanical behavior at varying loads and internal pressures was characterized both by experimental testing and by finite element simulation... A finite element (FE) model of the bubble actuator was developed in the commercial software ANSYS in order to determine the deformation of the
The light spot test: Measuring anxiety in mice in an automated home-cage environment.
Aarts, Emmeke; Maroteaux, Gregoire; Loos, Maarten; Koopmans, Bastijn; Kovačević, Jovana; Smit, August B; Verhage, Matthijs; Sluis, Sophie van der
2015-11-01
Behavioral tests of animals in a controlled experimental setting provide a valuable tool to advance understanding of genotype-phenotype relations, and to study the effects of genetic and environmental manipulations. To optimally benefit from the increasing numbers of genetically engineered mice, reliable high-throughput methods for comprehensive behavioral phenotyping of mice lines have become a necessity. Here, we describe the development and validation of an anxiety test, the light spot test, that allows for unsupervised, automated, high-throughput testing of mice in a home-cage system. This automated behavioral test circumvents bias introduced by pretest handling, and enables recording both baseline behavior and the behavioral test response over a prolonged period of time. We demonstrate that the light spot test induces a behavioral response in C57BL/6J mice. This behavior reverts to baseline when the aversive stimulus is switched off, and is blunted by treatment with the anxiolytic drug Diazepam, demonstrating predictive validity of the assay, and indicating that the observed behavioral response has a significant anxiety component. Also, we investigated the effectiveness of the light spot test as part of sequential testing for different behavioral aspects in the home-cage. Two learning tests, administered prior to the light spot test, affected the light spot test parameters. The light spot test is a novel, automated assay for anxiety-related high-throughput testing of mice in an automated home-cage environment, allowing for both comprehensive behavioral phenotyping of mice, and rapid screening of pharmacological compounds. Copyright © 2015 Elsevier B.V. All rights reserved.
Intelligent systems technology infrastructure for integrated systems
NASA Technical Reports Server (NTRS)
Lum, Henry, Jr.
1991-01-01
Significant advances have occurred during the last decade in intelligent systems technologies (a.k.a. knowledge-based systems, KBS) including research, feasibility demonstrations, and technology implementations in operational environments. Evaluation and simulation data obtained to date in real-time operational environments suggest that cost-effective utilization of intelligent systems technologies can be realized for Automated Rendezvous and Capture applications. The successful implementation of these technologies involve a complex system infrastructure integrating the requirements of transportation, vehicle checkout and health management, and communication systems without compromise to systems reliability and performance. The resources that must be invoked to accomplish these tasks include remote ground operations and control, built-in system fault management and control, and intelligent robotics. To ensure long-term evolution and integration of new validated technologies over the lifetime of the vehicle, system interfaces must also be addressed and integrated into the overall system interface requirements. An approach for defining and evaluating the system infrastructures including the testbed currently being used to support the on-going evaluations for the evolutionary Space Station Freedom Data Management System is presented and discussed. Intelligent system technologies discussed include artificial intelligence (real-time replanning and scheduling), high performance computational elements (parallel processors, photonic processors, and neural networks), real-time fault management and control, and system software development tools for rapid prototyping capabilities.
High Resolution Sensing and Control of Urban Water Networks
NASA Astrophysics Data System (ADS)
Bartos, M. D.; Wong, B. P.; Kerkez, B.
2016-12-01
We present a framework to enable high-resolution sensing, modeling, and control of urban watersheds using (i) a distributed sensor network based on low-cost cellular-enabled motes, (ii) hydraulic models powered by a cloud computing infrastructure, and (iii) automated actuation valves that allow infrastructure to be controlled in real time. This platform initiates two major advances. First, we achieve a high density of measurements in urban environments, with an anticipated 40+ sensors over each urban area of interest. In addition to new measurements, we also illustrate the design and evaluation of a "smart" control system for real-world hydraulic networks. This control system improves water quality and mitigates flooding by using real-time hydraulic models to adaptively control releases from retention basins. We evaluate the potential of this platform through two ongoing deployments: (i) a flood monitoring network in the Dallas-Fort Worth metropolitan area that detects and anticipates floods at the level of individual roadways, and (ii) a real-time hydraulic control system in the city of Ann Arbor, MI—soon to be one of the most densely instrumented urban watersheds in the United States. Through these applications, we demonstrate that distributed sensing and control of water infrastructure can improve flash flood predictions, emergency response, and stormwater contaminant mitigation.
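The abstract outlines the sense-model-actuate loop without implementation detail; a heavily simplified sketch of such a loop is below. The sensor read, the proportional rule for choosing a valve setting, and the actuation call are all placeholders, not the deployed system's logic.

```python
# Simplified sketch of a real-time retention-basin control loop. The sensor,
# target depth, gain, and valve interface are hypothetical placeholders.
import time

TARGET_DEPTH_M = 1.2        # keep storage available for the next storm
GAIN = 0.5                  # proportional gain (arbitrary)

def read_depth_m() -> float:
    """Placeholder for a cellular-enabled depth sensor reading."""
    return 1.5

def set_valve(fraction_open: float) -> None:
    """Placeholder for commanding the automated outlet valve."""
    print(f"valve -> {fraction_open:.0%} open")

while True:
    depth = read_depth_m()
    error = depth - TARGET_DEPTH_M
    # Open the valve further when the basin is above target, close it otherwise.
    setting = min(max(GAIN * error, 0.0), 1.0)
    set_valve(setting)
    time.sleep(300)          # re-evaluate every five minutes
```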
Apparatus for automated testing of biological specimens
Layne, Scott P.; Beugelsdijk, Tony J.
1999-01-01
An apparatus for performing automated testing of infectious biological specimens is disclosed. The apparatus comprises a process controller for translating user commands into test instrument suite commands, and a test instrument suite comprising a means to treat the specimen to manifest an observable result, and a detector for measuring the observable result to generate specimen test results.
The Status and Promise of Advanced M&V: An Overview of “M&V 2.0” Methods, Tools, and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franconi, Ellen; Gee, Matt; Goldberg, Miriam
Advanced measurement and verification (M&V) of energy efficiency savings, often referred to as M&V 2.0 or advanced M&V, is currently an object of much industry attention. Thus far, however, there has been a lack of clarity about what techniques M&V 2.0 includes, how those techniques differ from traditional approaches, what the key considerations are for their use, and what value propositions M&V 2.0 presents to different stakeholders. The objective of this paper is to provide background information and frame key discussion points related to advanced M&V. The paper identifies the benefits, methods, and requirements of advanced M&V and outlines key technical issues for applying these methods. It presents an overview of the distinguishing elements of M&V 2.0 tools and of how the industry is addressing needs for tool testing, consistency, and standardization, and it identifies opportunities for collaboration. In this paper, we consider two key features of M&V 2.0: (1) automated analytics that can provide ongoing, near-real-time savings estimates, and (2) increased data granularity in terms of frequency, volume, or end-use detail. Greater data granularity for large numbers of customers, such as that derived from comprehensive implementation of advanced metering infrastructure (AMI) systems, leads to very large data volumes. This drives interest in automated processing systems. It is worth noting, however, that automated processing can provide value even when applied to less granular data, such as monthly consumption data series. Likewise, more granular data, such as interval or end-use data, delivers value with or without automated processing, provided the processing is manageable. But it is the combination of greater data detail with automated processing that offers the greatest opportunity for value. Using M&V methods that capture load shapes together with automated processing can determine savings in near-real time to provide stakeholders with more timely and detailed information. This information can be used to inform ongoing building operations, provide early input on energy efficiency program design, or assess the impact of efficiency by location and time of day. Stakeholders who can make use of such information include regulators, energy efficiency program administrators, program evaluators, contractors and aggregators, building owners, the investment community, and grid planners. Although each stakeholder has its own priorities and challenges related to savings measurement and verification, the potential exists for all to draw from a single set of efficiency valuation data. Such an integrated approach could provide a base consistency across stakeholder uses.
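To make the "automated analytics" idea concrete, here is a minimal, hedged sketch of one common M&V 2.0 pattern: fit a weather-based baseline model on pre-retrofit interval data, project it onto the post-period, and report avoided energy. The feature choice (outdoor temperature and hour of day) and the synthetic data are illustrative assumptions, not a method prescribed by the paper.

```python
# Minimal sketch of an automated avoided-energy calculation: baseline regression
# fit on pre-period interval data, projected onto the post-period. All data here
# are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def features(temp_c, hour):
    # Simple design matrix: temperature, temperature squared, hour of day.
    return np.column_stack([temp_c, temp_c**2, hour])

# Synthetic pre-retrofit period (hourly): usage depends on weather and schedule.
temp_pre = rng.uniform(-5, 30, 2000)
hour_pre = rng.integers(0, 24, 2000)
kwh_pre = 20 + 0.8 * temp_pre + 0.5 * (hour_pre > 8) * (hour_pre < 18) + rng.normal(0, 1, 2000)

model = LinearRegression().fit(features(temp_pre, hour_pre), kwh_pre)

# Post-retrofit period: metered usage runs below what the baseline predicts.
temp_post = rng.uniform(-5, 30, 1000)
hour_post = rng.integers(0, 24, 1000)
kwh_post_metered = 18 + 0.7 * temp_post + rng.normal(0, 1, 1000)

baseline_post = model.predict(features(temp_post, hour_post))
avoided_kwh = float(np.sum(baseline_post - kwh_post_metered))
print(f"estimated avoided energy over the post period: {avoided_kwh:.0f} kWh")
```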
Laktabai, Jeremiah; Platt, Alyssa; Menya, Diana; Turner, Elizabeth L; Aswa, Daniel; Kinoti, Stephen; O'Meara, Wendy Prudhomme
2018-01-01
Community health workers (CHWs) play an important role in improving access to services in areas with limited health infrastructure or workforce. Supervision of CHWs by qualified health professionals is the main link between this lay workforce and the formal health system. The quality of services provided by lay health workers is dependent on adequate supportive supervision. It is, however, one of the weakest links in CHW programs due to logistical and resource constraints, especially in large scale programs. Interventions such as point of care testing using malaria rapid diagnostic tests (RDTs) require real time monitoring to ensure diagnostic accuracy. In this study, we evaluated the utility of a mobile health technology platform to remotely monitor malaria RDT (mRDT) testing by CHWs for quality improvement. As part of a large implementation trial involving mRDT testing by CHWs, we introduced the Fionet system, composed of a mobile device (Deki Reader, DR) that assists in processing and automated interpretation of mRDTs and connects to a cloud-based database that captures reports from the field in real time, displaying results in a custom dashboard of key performance indicators. A random sample of 100 CHWs were trained, provided with the Deki Readers and instructed to use them on 10 successive patients. The CHWs' interpretation was compared with the Deki Reader's automatic interpretation, and errors in processing and interpreting the RDTs were recorded. After the CHW entered their interpretation on the DR, the DR provided immediate, automated feedback and interpretation based on its reading of the same cassette. The study team monitored the CHW performance remotely and provided additional support. A total of 1251 primary and 113 repeat tests were performed by the 97 CHWs who used the DR. Agreement between the DR and the CHWs was observed in 91.6% of the tests. There were 61 (4.9%) processing and 52 (4.2%) interpretation errors among the primary tests. There was a tendency towards lower odds of errors with increasing number and frequency of tests, though not statistically significant. Of the 62 tests that were repeated due to errors, 79% achieved concordance between the CHW and the DR. Satisfaction with the use of the DR by the CHWs was high. Use of innovative mHealth strategies for monitoring and quality control can ensure quality within a large scale implementation of community level testing by lay health workers.
A test matrix sequencer for research test facility automation
NASA Technical Reports Server (NTRS)
Mccartney, Timothy P.; Emery, Edward F.
1990-01-01
The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
Rolling Deck to Repository I: Designing a Database Infrastructure
NASA Astrophysics Data System (ADS)
Arko, R. A.; Miller, S. P.; Chandler, C. L.; Ferrini, V. L.; O'Hara, S. H.
2008-12-01
The NSF-supported academic research fleet collectively produces a large and diverse volume of scientific data, which are increasingly being shared across disciplines and contributed to regional and global syntheses. As both Internet connectivity and storage technology improve, it becomes practical for ships to routinely deliver data and documentation for a standard suite of underway instruments to a central shoreside repository. Routine delivery will facilitate data discovery and integration, quality assessment, cruise planning, compliance with funding agency and clearance requirements, and long-term data preservation. We are working collaboratively with ship operators and data managers to develop a prototype "data discovery system" for NSF-supported research vessels. Our goal is to establish infrastructure for a central shoreside repository, and to develop and test procedures for the routine delivery of standard data products and documentation to the repository. Related efforts are underway to identify tools and criteria for quality control of standard data products, and to develop standard interfaces and procedures for maintaining an underway event log. Development of a shoreside repository infrastructure will include: 1. Deployment and testing of a central catalog that holds cruise summaries and vessel profiles. A cruise summary will capture the essential details of a research expedition (operating institution, ports/dates, personnel, data inventory, etc.), as well as related documentation such as event logs and technical reports. A vessel profile will capture the essential details of a ship's installed instruments (manufacturer, model, serial number, reference location, etc.), with version control as the profile changes through time. The catalog's relational database schema will be based on the UNOLS Data Best Practices Committee's recommendations, and published as a formal XML specification. 2. Deployment and testing of a central repository that holds navigation and routine underway data. Based on discussion with ship operators and data managers at a workgroup meeting in September 2008, we anticipate that a subset of underway data could be delivered from ships to the central repository in near-realtime - enabling the integrated display of ship tracks at a public Web portal, for example - and a full data package could be delivered post-cruise by network transfer or disk shipment. Once ashore, data sets could be distributed to assembly centers such as the Shipboard Automated Meteorological and Oceanographic System (SAMOS) for routine processing, quality assessment, and synthesis efforts - as well as transmitted to national data centers such as NODC and NGDC for permanent archival. 3. Deployment and testing of a basic suite of Web services to make cruise summaries, vessel profiles, event logs, and navigation data easily available. A standard set of catalog records, maps, and navigation features will be published via the Open Archives Initiative (OAI) and Open Geospatial Consortium (OGC) protocols, which can then be harvested by partner data centers and/or embedded in client applications.
Automation of the temperature elevation test in transformers with insulating oil.
Vicente, José Manuel Esteves; Rezek, Angelo José Junqueira; de Almeida, Antonio Tadeu Lyrio; Guimarães, Carlos Alberto Mohallem
2008-01-01
The automation of the temperature elevation test is outlined here, covering both the oil temperature elevation and the determination of the winding temperature elevation. Automating this test requires four thermometers, one three-phase wattmeter, a motorized voltage variator and a Kelvin bridge to measure the resistance. All of the equipment must communicate with a microcomputer, which will have the test program implemented. The system outlined here was initially implemented in the laboratory and, due to the good results achieved, is already in use in some transformer manufacturing plants.
Artificial intelligence and expert systems in-flight software testing
NASA Technical Reports Server (NTRS)
Demasie, M. P.; Muratore, J. F.
1991-01-01
The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.
Nguyen, Xuan Duc; Dengler, Thomas; Schulz-Linkholt, Monika; Klüter, Harald
2011-02-03
Transfusion-related acute lung injury (TRALI) is a severe complication of blood transfusion. TRALI has usually been associated with antibodies against leukocytes. The flow cytometric granulocyte immunofluorescence test (Flow-GIFT) has been introduced for routine use when investigating patients and healthy blood donors. Here we describe a novel tool in the automation of the Flow-GIFT that enables rapid screening of blood donations. We analyzed 440 sera from healthy female blood donors for the presence of granulocyte antibodies. As positive controls, 12 sera with known antibodies against HNA-1a, -1b, -2a, and -3a were additionally investigated. Whole-blood samples from HNA-typed donors were collected and the test cells were isolated using cell sedimentation in a Ficoll density gradient. Subsequently, leukocytes were incubated with the respective serum and binding of antibodies was detected using FITC-conjugated antihuman antibody. 7-AAD was used to exclude dead cells. Pipetting steps were automated using the Biomek NXp Multichannel Automation Workstation. All samples were prepared in 96-deep-well plates and analyzed by flow cytometry. The standard granulocyte immunofluorescence test (GIFT) and granulocyte agglutination test (GAT) were also performed as reference methods. Sixteen sera were positive in the automated Flow-GIFT, while five of these sera were negative in the standard GIFT (anti-HNA-3a, n = 3; anti-HNA-1b, n = 1) and GAT (anti-HNA-2a, n = 1). The automated Flow-GIFT was able to detect all granulocyte antibodies, which could only be detected by GIFT in combination with GAT. In serial dilution tests, the automated Flow-GIFT detected the antibodies at higher dilutions than the reference methods GIFT and GAT. The Flow-GIFT proved to be feasible for automation. This novel high-throughput system allows effective antigranulocyte antibody detection in a large donor population in order to prevent TRALI due to transfusion of blood products.
Idelevich, Evgeny A; Becker, Karsten; Schmitz, Janne; Knaack, Dennis; Peters, Georg; Köck, Robin
2016-01-01
Results of disk diffusion antimicrobial susceptibility testing depend on individual visual reading of inhibition zone diameters. Therefore, automated reading using camera systems might represent a useful tool for standardization. In this study, the ADAGIO automated system (Bio-Rad) was evaluated for reading disk diffusion tests of fastidious bacteria. A total of 144 clinical isolates (68 β-haemolytic streptococci, 28 Streptococcus pneumoniae, 18 viridans group streptococci, 13 Haemophilus influenzae, 7 Moraxella catarrhalis, and 10 Campylobacter jejuni) were tested on Mueller-Hinton agar supplemented with 5% defibrinated horse blood and 20 mg/L β-NAD (MH-F, Oxoid) according to EUCAST. Plates were read manually with a ruler and automatically using the ADAGIO system. Inhibition zone diameters, indicated by the automated system, were visually controlled and adjusted, if necessary. Among 1548 isolate-antibiotic combinations, comparison of automated vs. manual reading yielded categorical agreement (CA) without visual adjustment of the automatically determined zone diameters in 81.4%. In 20% (309 of 1548) of tests it was deemed necessary to adjust the automatically determined zone diameter after visual control. After adjustment, CA was 94.8%; very major errors (false susceptible interpretation), major errors (false resistant interpretation) and minor errors (false categorization involving intermediate result), calculated according to the ISO 20776-2 guideline, accounted for 13.7% (13 of 95 resistant results), 3.3% (47 of 1424 susceptible results) and 1.4% (21 of 1548 total results), respectively, compared to manual reading. The ADAGIO system allowed for automated reading of disk diffusion testing in fastidious bacteria and, after visual validation of the automated results, yielded good categorical agreement with manual reading.
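For readers unfamiliar with these error categories, the short sketch below scores paired manual/automated susceptibility categories into categorical agreement and very major, major, and minor error rates, using the denominators implied in the abstract (resistant results, susceptible results, and all results, respectively). It is an illustrative calculation, not the ADAGIO software; the example data are invented.

```python
# Illustrative scoring of automated vs. manual disk diffusion reads into
# categorical agreement and error rates, following the error definitions
# summarized in the abstract (ISO 20776-2 style denominators).

def score_reads(pairs):
    """pairs: list of (manual, automated) categories in {'S', 'I', 'R'}."""
    total = len(pairs)
    resistant = sum(1 for m, _ in pairs if m == 'R')
    susceptible = sum(1 for m, _ in pairs if m == 'S')
    agree = sum(1 for m, a in pairs if m == a)
    very_major = sum(1 for m, a in pairs if m == 'R' and a == 'S')  # false susceptible
    major = sum(1 for m, a in pairs if m == 'S' and a == 'R')       # false resistant
    minor = sum(1 for m, a in pairs if m != a and 'I' in (m, a))    # involves intermediate
    return {
        'categorical_agreement': agree / total,
        'very_major_error_rate': very_major / resistant if resistant else 0.0,
        'major_error_rate': major / susceptible if susceptible else 0.0,
        'minor_error_rate': minor / total,
    }

# Hypothetical paired readings (manual, automated).
example = [('S', 'S'), ('R', 'R'), ('R', 'S'), ('S', 'R'), ('I', 'S'), ('S', 'S')]
print(score_reads(example))
```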
Psiha, Maria M; Vlamos, Panayiotis
2017-01-01
5G is the next generation of mobile communication technology. The current generation of wireless technologies is evolving toward 5G to better serve end users and transform our society. Supported by 5G cloud technology, personal devices will extend their capabilities to various applications supporting smart life. They will play a significant role in health, medical tourism, security, safety, and social life applications. The next wave of mobile communication is to mobilize and automate industries and industry processes via Machine-Type Communication (MTC) and the Internet of Things (IoT). The current key performance indicators for the 5G infrastructure for the fully connected society are sufficient to satisfy most of the technical requirements in the healthcare sector. Thus, 5G can be considered a door opener for new possibilities and use cases, many of which are as yet unknown. In this paper we present heterogeneous use cases in the medical tourism sector, based on 5G infrastructure technologies and third-party cloud services.
Infrastructure development for radioactive materials at the NSLS-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprouster, D. J.; Weidner, R.; Ghose, S. K.
2018-02-01
The X-ray Powder Diffraction (XPD) Beamline at the National Synchrotron Light Source-II is a multipurpose instrument designed for high-resolution, high-energy X-ray scattering techniques. In this article, the capabilities, opportunities and recent developments in the characterization of radioactive materials at XPD are described. The overarching goal of this work is to provide researchers access to advanced synchrotron techniques suited to the structural characterization of materials for advanced nuclear energy systems. XPD is a new beamline providing high photon flux for X-ray Diffraction, Pair Distribution Function analysis and Small Angle X-ray Scattering. The infrastructure and software described here extend the existing capabilities at XPD to accommodate radioactive materials. Such techniques will contribute crucial information to the characterization and quantification of advanced materials for nuclear energy applications. We describe the automated radioactive sample collection capabilities and recent X-ray Diffraction and Small Angle X-ray Scattering results from neutron irradiated reactor pressure vessel steels and oxide dispersion strengthened steels.
X-33/RLV System Health Management/ Vehicle Health Management
NASA Technical Reports Server (NTRS)
Garbos, Raymond J.; Mouyos, William
1998-01-01
To reduce operations cost, the RLV must include the following elements: highly reliable, robust subsystems designed for simple repair access, with a simplified servicing infrastructure and expedited decision making about faults and anomalies. A key component for the Single Stage to Orbit (SSTO) RLV System used to meet these objectives is System Health Management (SHM). SHM comprises the vehicle component, Vehicle Health Management (VHM); the ground processing associated with the fleet (GVHM); and Ground Infrastructure Health Management (GIHM). The objective is to provide an automated collection and paperless health decision, maintenance and logistics system. Many critical technologies are necessary to make the SHM (and more specifically VHM) practical, reliable and cost effective. Sanders is leading the design, development and integration of the SHM system for RLV and X-33 SHM (a sub-scale, sub-orbit Advanced Technology Demonstrator). This paper will present the X-33 SHM design which forms the baseline for RLV SHM. This paper will also discuss other applications of these technologies.
The Emerging Infrastructure of Autonomous Astronomy
NASA Astrophysics Data System (ADS)
Seaman, R.; Allan, A.; Axelrod, T.; Cook, K.; White, R.; Williams, R.
2007-10-01
Advances in the understanding of cosmic processes demand that sky transient events be confronted with statistical techniques honed on static phenomena. Time domain data sets require vast surveys such as LSST {http://www.lsst.org/lsst_home.shtml} and Pan-STARRS {http://www.pan-starrs.ifa.hawaii.edu}. A new autonomous infrastructure must close the loop from the scheduling of survey observations, through data archiving and pipeline processing, to the publication of transient event alerts and automated follow-up, and to the easy analysis of resulting data. The IVOA VOEvent {http://voevent.org} working group leads efforts to characterize sky transient alerts published through VOEventNet {http://voeventnet.org}. The Heterogeneous Telescope Networks (HTN {http://www.telescope-networks.org}) consortium comprises observatories and robotic telescope projects seeking interoperability, with a long-term goal of creating an e-market for telescope time. Two projects relying on VOEvent and HTN are eSTAR {http://www.estar.org.uk} and the Thinking Telescope {http://www.thinkingtelescopes.lanl.gov} Project.
CMS Distributed Computing Integration in the LHC sustained operations era
NASA Astrophysics Data System (ADS)
Grandi, C.; Bockelman, B.; Bonacorsi, D.; Fisk, I.; González Caballero, I.; Farina, F.; Hernández, J. M.; Padhi, S.; Sarkar, S.; Sciabà, A.; Sfiligoi, I.; Spiga, F.; Úbeda García, M.; Van Der Ster, D. C.; Zvada, M.
2011-12-01
After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were not considered strategic in previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; an effective process to deploy new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity that is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements to Grid and Cloud software developers for the future.
Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco
2016-01-20
This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be carried out by non-specialized maintenance operators without compromising system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.
Infrastructure development for radioactive materials at the NSLS-II
Sprouster, David J.; Weidner, R.; Ghose, S. K.; ...
2017-11-04
The X-ray Powder Diffraction (XPD) Beamline at the National Synchrotron Light Source-II is a multipurpose instrument designed for high-resolution, high-energy X-ray scattering techniques. In this paper, the capabilities, opportunities and recent developments in the characterization of radioactive materials at XPD are described. The overarching goal of this work is to provide researchers access to advanced synchrotron techniques suited to the structural characterization of materials for advanced nuclear energy systems. XPD is a new beamline providing high photon flux for X-ray Diffraction, Pair Distribution Function analysis and Small Angle X-ray Scattering. The infrastructure and software described here extend the existing capabilities at XPD to accommodate radioactive materials. Such techniques will contribute crucial information to the characterization and quantification of advanced materials for nuclear energy applications. Finally, we describe the automated radioactive sample collection capabilities and recent X-ray Diffraction and Small Angle X-ray Scattering results from neutron irradiated reactor pressure vessel steels and oxide dispersion strengthened steels.
DES Science Portal: II- Creating Science-Ready Catalogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fausti Neto, Angelo; et al.
We present a novel approach for creating science-ready catalogs through a software infrastructure developed for the Dark Energy Survey (DES). We integrate the data products released by the DES Data Management and additional products created by the DES collaboration in an environment known as DES Science Portal. Each step involved in the creation of a science-ready catalog is recorded in a relational database and can be recovered at any time. We describe how the DES Science Portal automates the creation and characterization of lightweight catalogs for DES Year 1 Annual Release, and show its flexibility in creating multiple catalogs with different inputs and configurations. Finally, we discuss the advantages of this infrastructure for large surveys such as DES and the Large Synoptic Survey Telescope. The capability of creating science-ready catalogs efficiently and with full control of the inputs and configurations used is an important asset for supporting science analysis using data from large astronomical surveys.
Improving GPR Surveys Productivity by Array Technology and Fully Automated Processing
NASA Astrophysics Data System (ADS)
Morello, Marco; Ercoli, Emanuele; Mazzucchelli, Paolo; Cottino, Edoardo
2016-04-01
The realization of network infrastructures with lower environmental impact and the tendency to use less invasive digging technologies in terms of time and space of road occupation and restoration play a key role in the development of communication networks. However, pre-existing buried utilities must be detected and located in the subsurface to exploit the high productivity of modern digging apparatus. According to SUE quality level B+, both position and depth of subsurface utilities must be accurately estimated, demanding 3D GPR surveys. In fact, the advantages of 3D GPR acquisitions (obtained either by multiple 2D recordings or by an antenna array) versus 2D acquisitions are well-known. Nonetheless, the amount of acquired data for such 3D acquisitions does not usually allow processing and interpretation to be completed directly in the field and in real time, thus limiting the overall efficiency of the GPR acquisition. As an example, the "low impact mini-trench" technique (addressed in ITU - International Telecommunication Union - L.83 recommendation) requires that non-destructive mapping of buried services enhances its productivity to match the improvements of new digging equipment. Nowadays multi-antenna and multi-pass GPR acquisitions demand new processing techniques that can obtain high quality subsurface images, taking full advantage of 3D data: the development of a fully automated and real-time 3D GPR processing system plays a key role in overall optical network deployment profitability. Furthermore, currently available computing power suggests the feasibility of processing schemes that incorporate better focusing algorithms. A novel processing scheme, whose goal is the automated processing and detection of buried targets and which can be applied in real time to 3D GPR array systems, has been developed and fruitfully tested with two different GPR arrays (16 antennas, 900 MHz central frequency, and 34 antennas, 600 MHz central frequency). The proposed processing scheme takes advantage of 3D data multiplicity by continuous real time data focusing. Pre-stack reflection angle gathers G(x, θ; v) are computed at nv different velocities (by means of Kirchhoff depth-migration kernels, which can naturally cope with any acquisition pattern and handle irregular sampling issues). It must be noted that the analysis of pre-stack reflection angle gathers plays a key role in automated detection: targets are identified and the best local propagation velocities are recovered through a correlation estimate computed for all the nv reflection angle gathers. Indeed, the data redundancy of 3D GPR acquisitions highly improves the proposed automatic detection reliability. The goal of real-time automated processing has been pursued without the need of specific high performance processing hardware (a simple laptop is required). Moreover, the automation of the entire surveying process yields high quality, repeatable results without the need of skilled interpreters. The proposed acquisition procedure has been extensively tested: more than 100 km of acquired data prove the feasibility of the proposed approach.
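The correlation-based velocity selection described above can be sketched very roughly as follows: for each trial velocity, a migrated reflection-angle gather is scored with a semblance-like coherence measure, and the velocity giving the most consistent (flattest) gather is chosen. This is a toy illustration under strong simplifications, not the authors' algorithm; the migration step is assumed to have been performed elsewhere and the gathers below are invented.

```python
# Simplified sketch of velocity selection by coherence across reflection-angle
# gathers. gather[i][j] is the migrated amplitude for angle trace i, depth
# sample j; a flat, consistent gather indicates a velocity close to correct.

def gather_coherence(gather):
    """Semblance-like measure: stacked energy over total energy (0..1)."""
    n_traces = len(gather)
    n_samples = len(gather[0])
    num = sum(sum(gather[i][j] for i in range(n_traces)) ** 2
              for j in range(n_samples))
    den = n_traces * sum(gather[i][j] ** 2
                         for i in range(n_traces) for j in range(n_samples))
    return num / den if den else 0.0

def pick_velocity(gathers_by_velocity):
    """Return the trial velocity whose angle gather is most coherent."""
    return max(gathers_by_velocity,
               key=lambda v: gather_coherence(gathers_by_velocity[v]))

# Hypothetical gathers for three trial velocities (m/ns); the 0.10 gather is flat.
gathers = {
    0.08: [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]],
    0.10: [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]],
    0.12: [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
}
print("best velocity:", pick_velocity(gathers))
```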
Mock, Ulrike; Nickolay, Lauren; Philip, Brian; Cheung, Gordon Weng-Kit; Zhan, Hong; Johnston, Ian C D; Kaiser, Andrew D; Peggs, Karl; Pule, Martin; Thrasher, Adrian J; Qasim, Waseem
2016-08-01
Novel cell therapies derived from human T lymphocytes are exhibiting enormous potential in early-phase clinical trials in patients with hematologic malignancies. Ex vivo modification of T cells is currently limited to a small number of centers with the required infrastructure and expertise. The process requires isolation, activation, transduction, expansion and cryopreservation steps. To simplify procedures and widen applicability for clinical therapies, automation of these procedures is being developed. The CliniMACS Prodigy (Miltenyi Biotec) has recently been adapted for lentiviral transduction of T cells and here we analyse the feasibility of a clinically compliant T-cell engineering process for the manufacture of T cells encoding chimeric antigen receptors (CAR) for CD19 (CAR19), a widely targeted antigen in B-cell malignancies. Using a closed, single-use tubing set we processed mononuclear cells from fresh or frozen leukapheresis harvests collected from healthy volunteer donors. Cells were phenotyped and subjected to automated processing and activation using TransAct, a polymeric nanomatrix activation reagent incorporating CD3/CD28-specific antibodies. Cells were then transduced and expanded in the CentriCult-Unit of the tubing set, under stabilized culture conditions with automated feeding and media exchange. The process was continuously monitored to determine kinetics of expansion, transduction efficiency and phenotype of the engineered cells in comparison with small-scale transductions run in parallel. We found that transduction efficiencies, phenotype and function of CAR19 T cells were comparable with existing procedures and overall T-cell yields sufficient for anticipated therapeutic dosing. The automation of closed-system T-cell engineering should improve dissemination of emerging immunotherapies and greatly widen applicability. Copyright © 2016. Published by Elsevier Inc.
Flight control system design factors for applying automated testing techniques
NASA Technical Reports Server (NTRS)
Sitz, Joel R.; Vernon, Todd H.
1990-01-01
The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.
2017-03-01
DOD Major Automated Information Systems: Improvements Can Be Made in Applying Leading Practices for Managing Risk and Testing. Highlights of GAO-17-322, a report to congressional committees, March 2017. United States Government Accountability Office.
ERIC Educational Resources Information Center
Loukina, Anastassia; Buzick, Heather
2017-01-01
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…
An Automated, Experimenter-Free Method for the Standardised, Operant Cognitive Testing of Rats
Rivalan, Marion; Munawar, Humaira; Fuchs, Anna; Winter, York
2017-01-01
Animal models of human pathology are essential for biomedical research. However, a recurring issue in the use of animal models is the poor reproducibility of behavioural and physiological findings within and between laboratories. The most critical factor influencing this issue remains the experimenter themselves. One solution is the use of procedures devoid of human intervention. We present a novel approach to experimenter-free testing of cognitive abilities in rats, by combining undisturbed group housing with automated, standardized and individual operant testing. This experimenter-free system consisted of an automated-operant system (Bussey-Saksida rat touch screen) connected to a home cage containing group living rats via an automated animal sorter (PhenoSys). The automated animal sorter, which is based on radio-frequency identification (RFID) technology, functioned as a mechanical replacement of the experimenter. Rats learnt to regularly and individually enter the operant chamber and remained there for the duration of the experimental session only. Self-motivated rats acquired the complex touch screen task of trial-unique non-matching to location (TUNL) in half the time reported for animals that were manually placed into the operant chamber. Rat performance was similar between the two groups within our laboratory, and comparable to previously published results obtained elsewhere. This reproducibility, both within and between laboratories, confirms the validity of this approach. In addition, automation reduced daily experimental time by 80%, eliminated animal handling, and reduced equipment cost. This automated, experimenter-free setup is a promising tool of great potential for testing a large variety of functions with full automation in future studies. PMID:28060883
Economic and workflow analysis of a blood bank automated system.
Shin, Kyung-Hwa; Kim, Hyung Hoi; Chang, Chulhun L; Lee, Eun Yup
2013-07-01
This study compared the estimated costs and times required for ABO/Rh(D) typing and unexpected antibody screening using an automated system and manual methods. The total cost included direct and labor costs. Labor costs were calculated on the basis of the average operator salaries and unit values (minutes), defined as the hands-on time required to test one sample. To estimate unit values, workflows were recorded on video, and the time required for each process was analyzed separately. The unit values of ABO/Rh(D) typing using the manual method were 5.65 and 8.1 min during regular and unsocial working hours, respectively. The unit value was less than 3.5 min when several samples were tested simultaneously. The unit value for unexpected antibody screening was 2.6 min. The unit values using the automated method for ABO/Rh(D) typing, unexpected antibody screening, and both simultaneously were all 1.5 min. The total cost of ABO/Rh(D) typing of only one sample using the automated analyzer was lower than that of testing only one sample using the manual technique but higher than that of testing several samples simultaneously. The total cost of unexpected antibody screening using an automated analyzer was less than that using the manual method. ABO/Rh(D) typing using an automated analyzer incurs a lower unit value and cost than that using the manual technique when only one sample is tested at a time. Unexpected antibody screening using an automated analyzer always incurs a lower unit value and cost than that using the manual technique.
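The cost model described in the abstract (total cost equals direct cost plus salary per minute multiplied by the hands-on unit value) can be written down directly. The sketch below does so, using the published unit values but invented salary and direct-cost figures, so the numbers are purely illustrative.

```python
# Illustrative cost model from the abstract: total cost per test equals
# direct cost plus labor cost, where labor cost is the operator's salary
# per minute multiplied by the hands-on time ("unit value") per sample.
# Monetary values below are hypothetical; the unit values (5.65 and 1.5 min)
# are taken from the abstract.

def cost_per_test(direct_cost, salary_per_minute, unit_value_minutes):
    labor_cost = salary_per_minute * unit_value_minutes
    return direct_cost + labor_cost

# Hypothetical comparison for ABO/Rh(D) typing of a single sample.
manual = cost_per_test(direct_cost=2.0, salary_per_minute=0.5, unit_value_minutes=5.65)
automated = cost_per_test(direct_cost=4.0, salary_per_minute=0.5, unit_value_minutes=1.5)
print(f"manual: {manual:.2f}, automated: {automated:.2f}")
```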
Alternative Fuels Research Laboratory
NASA Technical Reports Server (NTRS)
Surgenor, Angela D.; Klettlinger, Jennifer L.; Nakley, Leah M.; Yen, Chia H.
2012-01-01
NASA Glenn has invested over $1.5 million in engineering and infrastructure upgrades to renovate an existing test facility at the NASA Glenn Research Center (GRC), which is now being used as an Alternative Fuels Laboratory. Facility systems have demonstrated reliability and consistency for continuous and safe operations in Fischer-Tropsch (F-T) synthesis and thermal stability testing. This effort is supported by the NASA Fundamental Aeronautics Subsonic Fixed Wing project. The purpose of this test facility is to conduct bench scale F-T catalyst screening experiments. These experiments require the use of a synthesis gas feedstock, which will enable the investigation of F-T reaction kinetics, product yields and hydrocarbon distributions. Currently the facility has the capability of performing three simultaneous reactor screening tests, along with a fourth fixed-bed reactor for catalyst activation studies. Product gas composition and performance data can be continuously obtained with an automated gas sampling system, which directly connects the reactors to a micro-gas chromatograph (micro GC). Liquid and molten product samples are collected intermittently and are analyzed by injection as diluted samples into designated gas chromatograph units. The test facility also has the capability of performing thermal stability experiments on alternative aviation fuels with the use of a Hot Liquid Process Simulator (HLPS) (Ref. 1) in accordance with ASTM D 3241 "Thermal Oxidation Stability of Aviation Fuels" (JFTOT method) (Ref. 2). An Ellipsometer will be used to study fuel fouling thicknesses on heated tubes from the HLPS experiments. A detailed overview of the test facility systems and capabilities is provided in this paper.
NREL Serves as the Energy Department's Showcase for Cutting-Edge Fuel Cell
[Web-page excerpt; text is fragmentary] ...vehicle on loan from Hyundai through a one-year Cooperative Research and Development Agreement... produced at the Hydrogen Infrastructure Testing and Research Facility (HITRF) located at NREL's Energy... infrastructure as part of the Energy Department's Hydrogen Fueling Infrastructure Research and...
Advanced Technologies and Methodology for Automated Ultrasonic Testing Systems Quantification
DOT National Transportation Integrated Search
2011-04-29
For automated ultrasonic testing (AUT) detection and sizing accuracy, this program developed a methodology for quantification of AUT systems, advancing and quantifying AUT systems image-capture capabilities, quantifying the performance of multiple AUT...
Arlt, Sönke; Buchert, Ralph; Spies, Lothar; Eichenlaub, Martin; Lehmbeck, Jan T; Jahn, Holger
2013-06-01
Fully automated magnetic resonance imaging (MRI)-based volumetry may serve as biomarker for the diagnosis in patients with mild cognitive impairment (MCI) or dementia. We aimed at investigating the relation between fully automated MRI-based volumetric measures and neuropsychological test performance in amnestic MCI and patients with mild dementia due to Alzheimer's disease (AD) in a cross-sectional and longitudinal study. In order to assess a possible prognostic value of fully automated MRI-based volumetry for future cognitive performance, the rate of change of neuropsychological test performance over time was also tested for its correlation with fully automated MRI-based volumetry at baseline. In 50 subjects, 18 with amnestic MCI, 21 with mild AD, and 11 controls, neuropsychological testing and T1-weighted MRI were performed at baseline and at a mean follow-up interval of 2.1 ± 0.5 years (n = 19). Fully automated MRI volumetry of the grey matter volume (GMV) was performed using a combined stereotactic normalisation and segmentation approach as provided by SPM8 and a set of pre-defined binary lobe masks. Left and right hippocampus masks were derived from probabilistic cytoarchitectonic maps. Volumes of the inner and outer liquor space were also determined automatically from the MRI. Pearson's test was used for the correlation analyses. Left hippocampal GMV was significantly correlated with performance in memory tasks, and left temporal GMV was related to performance in language tasks. Bilateral frontal, parietal and occipital GMVs were correlated to performance in neuropsychological tests comprising multiple domains. Rate of GMV change in the left hippocampus was correlated with decline of performance in the Boston Naming Test (BNT), Mini-Mental Status Examination, and trail making test B (TMT-B). The decrease of BNT and TMT-A performance over time correlated with the loss of grey matter in multiple brain regions. We conclude that fully automated MRI-based volumetry allows detection of regional grey matter volume loss that correlates with neuropsychological performance in patients with amnestic MCI or mild AD. Because of the high level of automation, MRI-based volumetry may easily be integrated into clinical routine to complement the current diagnostic procedure.
The GENIUS Grid Portal and robot certificates: a new tool for e-Science
Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio
2009-01-01
Background Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the high security policy required in order to access distributed computing resources has been a rather big limiting factor when trying to broaden the usage of Grids into a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates and the procedure to get and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Methods Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful for instance to automate grid service monitoring, data processing production, and distributed data collection systems. Basically these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphic interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. Results The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. Conclusion The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities. PMID:19534747
The GENIUS Grid Portal and robot certificates: a new tool for e-Science.
Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio
2009-06-16
Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the high security policy required in order to access distributed computing resources has been a rather big limiting factor when trying to broaden the usage of Grids into a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates and the procedure to get and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful for instance to automate grid service monitoring, data processing production, and distributed data collection systems. Basically these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphic interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities.
NASA Astrophysics Data System (ADS)
Kuo, K. S.; Rilee, M. L.
2017-12-01
Existing pathways for bringing together massive, diverse Earth Science datasets for integrated analyses burden end users with data packaging and management details irrelevant to their domain goals. The major data repositories focus on archival, discovery, and dissemination of products (files) in a standardized manner. End-users must download and then adapt these files using local resources and custom methods before analysis can proceed. This reduces scientific or other domain productivity, as scarce resources and expertise must be diverted to data processing. The Spatio-Temporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions, e.g. conditional subsetting, into integer operations and takes into account representative spatiotemporal resolutions of the data in the datasets, which is needed for data placement alignment of geo-spatiotemporally diverse data on massively parallel resources. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain specific questions instead of expending their expertise on data processing. While STARE is not tied to any particular computing technology, we have used STARE for visualization and the SciDB array database to analyze Earth Science data on a 28-node compute cluster. STARE's automatic data placement and coupling of geometric and array indexing allows complicated data comparisons to be realized as straightforward database operations like "join." With STARE-enabled automation, SciDB+STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of integrable data, and easing result sharing. Using SciDB+STARE as part of an integrated analysis infrastructure, we demonstrate the dramatic ease of combining diametrically different datasets, i.e. gridded (NMQ radar) vs. spacecraft swath (TRMM). SciDB+STARE is an important step towards a computational infrastructure for integrating and sharing diverse, complex Earth Science data and science products derived from them.
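To make the "set logic as integer operations" idea concrete, the sketch below uses a simple quadtree-style key: each additional level appends two bits, so testing whether a fine cell lies inside a coarse cell reduces to an integer shift-and-compare. This is not the actual STARE encoding (which is defined on its own spatial decomposition); it only illustrates the general indexing principle, and all function names and coordinates are hypothetical.

```python
# Illustrative quadtree-style index (NOT the actual STARE encoding) showing
# how a hierarchical spatial key lets containment tests become integer
# prefix comparisons, the general idea described in the abstract.

def encode(lon, lat, level):
    """Interleave longitude/latitude halving decisions into an integer key."""
    lon_lo, lon_hi, lat_lo, lat_hi = -180.0, 180.0, -90.0, 90.0
    key = 0
    for _ in range(level):
        lon_mid = (lon_lo + lon_hi) / 2
        lat_mid = (lat_lo + lat_hi) / 2
        bit_lon = 1 if lon >= lon_mid else 0
        bit_lat = 1 if lat >= lat_mid else 0
        key = (key << 2) | (bit_lon << 1) | bit_lat
        lon_lo, lon_hi = (lon_mid, lon_hi) if bit_lon else (lon_lo, lon_mid)
        lat_lo, lat_hi = (lat_mid, lat_hi) if bit_lat else (lat_lo, lat_mid)
    return key

def contains(coarse_key, coarse_level, fine_key, fine_level):
    """A coarse cell contains a finer cell iff the coarse key is a prefix."""
    if coarse_level > fine_level:
        return False
    return (fine_key >> (2 * (fine_level - coarse_level))) == coarse_key

region = encode(-77.0, 38.9, 4)         # coarse cell around a region of interest
sample = encode(-77.03, 38.89, 10)      # finer cell for a specific observation
print(contains(region, 4, sample, 10))  # True: subsetting as integer arithmetic
```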
Energy Systems Integration Laboratory | Energy Systems Integration Facility
[Web-page excerpt; text is fragmentary] ...systems test hub includes a Class 1, Division 2 space for performing tests of high-pressure hydrogen... the Laboratory offers the following capabilities: High-Pressure Hydrogen Systems... high-pressure hydrogen infrastructure. Key infrastructure: robotic arm; high-pressure hydrogen; natural gas supply; standalone SCADA.
UAS Integration in the NAS Project: Integrated Test and LVC Infrastructure
NASA Technical Reports Server (NTRS)
Murphy, Jim; Hoang, Ty
2015-01-01
Overview presentation of the Integrated Test and Evaluation sub-project of the Unmanned Aircraft System (UAS) Integration in the National Airspace System (NAS) project. The emphasis of the presentation is the Live, Virtual, and Constructive (LVC) infrastructure (a broadly used name for classifying modeling and simulation) and the use of external assets and connections.
Decline in Radiation Hardened Microcircuit Infrastructure
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.
2015-01-01
Two areas of radiation hardened microcircuit infrastructure will be discussed: 1) the availability and performance of radiation hardened microcircuits, and 2) access to radiation test facilities, primarily for proton single event effects (SEE) testing. Other areas not discussed but of concern include the challenge of maintaining access to radiation effects tools for assurance purposes, and access to radiation test facilities, primarily for heavy ion single event effects (SEE) testing. Status and implications will be discussed for each area.
Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC
NASA Technical Reports Server (NTRS)
Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet
1999-01-01
The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.
Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC
NASA Technical Reports Server (NTRS)
Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet
1998-01-01
The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.
Towards a geophysical decision-support system for monitoring and managing unstable slopes
NASA Astrophysics Data System (ADS)
Chambers, J. E.; Meldrum, P.; Wilkinson, P. B.; Uhlemann, S.; Swift, R. T.; Inauen, C.; Gunn, D.; Kuras, O.; Whiteley, J.; Kendall, J. M.
2017-12-01
Conventional approaches for condition monitoring, such as walk-over surveys, remote sensing or intrusive sampling, are often inadequate for predicting instabilities in natural and engineered slopes. Surface observations cannot detect the subsurface precursors to failure events; instead they can only identify failure once it has begun. On the other hand, intrusive investigations using boreholes only sample a very small volume of ground and hence small scale deterioration processes in heterogeneous ground conditions can easily be missed. It is increasingly being recognised that geophysical techniques can complement conventional approaches by providing spatial subsurface information. Here we describe the development and testing of a new geophysical slope monitoring system. It is built around low-cost electrical resistivity tomography instrumentation, combined with integrated geotechnical logging capability, and coupled with data telemetry. An automated data processing and analysis workflow is being developed to streamline information delivery. The development of this approach has provided the basis of a decision-support tool for monitoring and managing unstable slopes. The hardware component of the system has been operational at a number of field sites associated with a range of natural and engineered slopes for up to two years. We report on the monitoring results from these sites, discuss the practicalities of installing and maintaining long-term geophysical monitoring infrastructure, and consider the requirements of a fully automated data processing and analysis workflow. We propose that the result of this development work is a practical decision-support tool that can provide near-real-time information relating to the internal condition of problematic slopes.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-20
..., LLC, Subsidiary of Mag Industrial Automation Systems, Machesney Park, IL; Notice of Negative... automation equipment and machine tools did not contribute to worker separations at the subject facility and...' firm's declining customers. The survey revealed no imports of automation equipment and machine tools by...
Automating Media Centers and Small Libraries: A Microcomputer-Based Approach.
ERIC Educational Resources Information Center
Meghabghab, Dania Bilal
Although the general automation process can be applied to most libraries, small libraries and media centers require a customized approach. Using a systematic approach, this guide covers each step and aspect of automation in a small library setting, and combines the principles of automation with field-tested activities. After discussing needs…
Operational Based Vision Assessment Automated Vision Test Collection User Guide
2017-05-15
...repeatability to support correlation analysis. The AVT research grade tests also support interservice, international, industry, and academic partnerships... software, provides information concerning various menu options and operation of the test, and provides a brief description of each of the automated vision...
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
An automated qualification framework for the MeerKAT CAM (Control-And-Monitoring)
NASA Astrophysics Data System (ADS)
van den Heever, Lize; Marais, Neilen; Slabber, Martin
2016-08-01
This paper introduces and discusses the design of an Automated Qualification Framework (AQF) that was developed to automate as much as possible of the formal Qualification Testing of the Control And Monitoring (CAM) subsystem of the 64-dish MeerKAT radio telescope currently under construction in the Karoo region of South Africa. The AQF allows each Integrated CAM Test to reference the MeerKAT CAM requirement and associated verification requirement it covers, and automatically produces the Qualification Test Procedure and Qualification Test Report from the test steps and evaluation steps annotated in the Integrated CAM Tests. The MeerKAT System Engineers are extremely pleased with the AQF results, and even more so with the approach and process it enforces.
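The annotation-driven pattern described above can be sketched in a few lines: each test declares the requirement and verification requirement it covers and records human-readable test and evaluation steps, from which a procedure/report is generated. This is a generic illustration, not the MeerKAT CAM implementation; the decorator, registry, and requirement identifiers are hypothetical.

```python
# Minimal sketch (not the MeerKAT implementation) of requirement-annotated
# tests from which a qualification procedure/report can be generated.

REGISTRY = []

def covers(requirement, verification):
    """Record which requirement/verification requirement a test covers."""
    def decorator(func):
        REGISTRY.append(func)
        func.requirement = requirement
        func.verification = verification
        func.steps = []
        return func
    return decorator

def step(test_func, description, check):
    """Run one annotated evaluation step and record its outcome."""
    outcome = "PASS" if check() else "FAIL"
    test_func.steps.append((description, outcome))

@covers(requirement="CAM-R-0123", verification="CAM-VR-0123")  # hypothetical IDs
def test_alarm_latency():
    step(test_alarm_latency, "Raise a simulated sensor alarm", lambda: True)
    step(test_alarm_latency, "Alarm is displayed within 1 s", lambda: 0.4 < 1.0)

def qualification_report():
    lines = []
    for test in REGISTRY:
        test()
        lines.append(f"{test.__name__} covers {test.requirement} / {test.verification}")
        lines += [f"  - {desc}: {outcome}" for desc, outcome in test.steps]
    return "\n".join(lines)

print(qualification_report())
```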
NASA Technical Reports Server (NTRS)
Mixon, Randolph W.; Hankins, Walter W., III; Wise, Marion A.
1988-01-01
Research at Langley AFB concerning automated space assembly is reviewed, including a Space Shuttle experiment to test astronaut ability to assemble a repetitive truss structure, testing the use of teleoperated manipulators to construct the Assembly Concept for Construction of Erectable Space Structures I truss, and assessment of the basic characteristics of manipulator assembly operations. Other research topics include the simultaneous coordinated control of dual-arm manipulators and the automated assembly of candidate Space Station trusses. Consideration is given to the construction of an Automated Space Assembly Laboratory to study and develop the algorithms, procedures, special purpose hardware, and processes needed for automated truss assembly.
Sauer, Juergen; Chavaillaz, Alain; Wastell, David
2016-06-01
This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.
Extracting Information from Narratives: An Application to Aviation Safety Reports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Posse, Christian; Matzke, Brett D.; Anderson, Catherine M.
2005-05-12
Aviation safety reports are the best available source of information about why a flight incident happened. However, the stream-of-consciousness style that permeates the narratives makes automation of the information extraction task difficult. We propose an approach and infrastructure based on a common pattern specification language to capture relevant information via normalized template expression matching in context. Template expression matching handles variants of multi-word expressions. Normalization improves the likelihood of correct hits by standardizing and cleaning the vocabulary used in narratives. Checking for the presence of negative modifiers in the proximity of a potential hit reduces the chance of false hits. We present the above approach in the context of a specific application, which is the extraction of human performance factors from NASA ASRS reports. While knowledge infusion from experts plays a critical role during the learning phase, early results show that in a production mode, the automated process provides information that is consistent with analyses by human subjects.
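To illustrate the matching strategy described above (normalized vocabulary, template matching, and a negation check near each candidate hit), here is a small illustrative sketch; the patterns, normalization table, and window size are assumptions, not the authors' pattern specification language:

    # Illustrative sketch only: normalized template matching for one
    # human-performance factor, with a negation check in the proximity of a
    # potential hit.
    import re

    NORMALIZE = {"fatigued": "fatigue", "tiredness": "fatigue", "tired": "fatigue"}
    NEGATIONS = {"no", "not", "denied", "without"}

    def normalize(text: str) -> list[str]:
        tokens = re.findall(r"[a-z']+", text.lower())
        return [NORMALIZE.get(t, t) for t in tokens]

    def match_factor(text: str, template: str = "fatigue", window: int = 3) -> bool:
        """Return True if the template matches and is not negated within `window` tokens."""
        tokens = normalize(text)
        for i, tok in enumerate(tokens):
            if tok == template:
                context = tokens[max(0, i - window):i]
                if not any(t in NEGATIONS for t in context):
                    return True
        return False

    # e.g. match_factor("Captain reported being tired on approach")      -> True
    #      match_factor("Crew denied any tiredness during the incident") -> False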
Towards data integration automation for the French rare disease registry.
Maaroufi, Meriem; Choquet, Rémy; Landais, Paul; Jaulent, Marie-Christine
2015-01-01
Building a medical registry upon an existing infrastructure and rooted practices is not an easy task. It is the case for the BNDMR project, the French rare disease registry, that aims to collect administrative and medical data of rare disease patients seen in different hospitals. To avoid duplicating data entry for health professionals, the project plans to deploy connectors with the existing systems to automatically retrieve data. Given the data heterogeneity and the large number of source systems, the automation of connectors creation is required. In this context, we propose a methodology that optimizes the use of existing alignment approaches in the data integration processes. The generated mappings are formalized in exploitable mapping expressions. Following this methodology, a process has been experimented on specific data types of a source system: Boolean and predefined lists. As a result, effectiveness of the used alignment approach has been enhanced and more good mappings have been detected. Nonetheless, further improvements could be done to deal with the semantic issue and process other data types.
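As a rough illustration of what an "exploitable mapping expression" for the Boolean and predefined-list data types might look like once an alignment has been detected, consider the following sketch; all field names and code lists are invented, and this is not the BNDMR connector code:

    # Hypothetical sketch of formalizing a detected alignment as an exploitable
    # mapping expression for a Boolean source field and a predefined-list field.
    def map_boolean(value, true_code="Y", false_code="N", missing_code="UNK"):
        """Map a source Boolean to the registry's coded value."""
        if value is None:
            return missing_code
        return true_code if bool(value) else false_code

    def map_predefined_list(value, alignment, default="OTHER"):
        """Map a source list entry via an alignment table produced by the matching step."""
        return alignment.get(value, default)

    sex_alignment = {"M": "male", "F": "female"}   # example alignment result
    record = {"consent": map_boolean(True), "sex": map_predefined_list("F", sex_alignment)}
    # record == {"consent": "Y", "sex": "female"}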
Cartwright, Jennifer M.; Diehl, Timothy H.
2017-01-17
High-resolution digital elevation models (DEMs) derived from light detection and ranging (lidar) enable investigations of stream-channel geomorphology with much greater precision than previously possible. The U.S. Geological Survey has developed the DEM Geomorphology Toolbox, containing seven tools to automate the identification of sites of geomorphic instability that may represent sediment sources and sinks in stream-channel networks. These tools can be used to modify input DEMs on the basis of known locations of stormwater infrastructure, derive flow networks at user-specified resolutions, and identify possible sites of geomorphic instability including steep banks, abrupt changes in channel slope, or areas of rough terrain. Field verification of tool outputs identified several tool limitations but also demonstrated their overall usefulness in highlighting likely sediment sources and sinks within channel networks. In particular, spatial clusters of outputs from multiple tools can be used to prioritize field efforts to assess and restore eroding stream reaches.
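The toolbox itself runs inside a GIS, but the core idea of one of its tools, flagging steep banks by thresholding slope derived from a lidar DEM, can be sketched in a few lines; the cell size and slope threshold below are illustrative assumptions:

    # Not the USGS DEM Geomorphology Toolbox itself; a minimal NumPy sketch of
    # one of its ideas: flagging steep-bank cells by thresholding slope computed
    # from a lidar DEM.
    import numpy as np

    def steep_bank_mask(dem: np.ndarray, cell_size: float = 1.0, slope_deg: float = 30.0):
        """Return a boolean mask of cells whose local slope exceeds `slope_deg`."""
        dzdy, dzdx = np.gradient(dem, cell_size)          # elevation change per metre
        slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
        return slope > slope_deg

    dem = np.random.rand(100, 100) * 5.0                  # stand-in for a real lidar DEM
    mask = steep_bank_mask(dem, cell_size=1.0, slope_deg=30.0)
    print(mask.sum(), "candidate steep-bank cells")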
Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun
2012-01-01
Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource intensive nature of NGS secondary analysis built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
Geostationary platform study: Advanced ESGP/evolutionary SSF accommodation study
NASA Technical Reports Server (NTRS)
1990-01-01
The implications for the evolutionary space station of accommodating geosynchronous Earth Orbit (GEO) facilities, including unmanned satellites and platforms, manned elements, and transportation and servicing vehicles/elements, are examined. The latest existing definitions of typical unmanned GEO facilities and transportation and servicing vehicles/elements are utilized. The physical design, functional design, and operations implications at the space station are determined. Various concepts of the space station from past studies are utilized, ranging from the IOC Multifunction Space Station to a branched transportation node space station, and the implications of accommodating the GEO infrastructure for each type are assessed. Where possible, parametric data are provided to show the implications of variations in sizes and quantities of elements, launch rates, crew sizes, etc. The use of advanced automation, robotics equipment, and an efficient mix of manned/automated support for accomplishing necessary activities at the space station is identified and assessed. The products of this study are configuration sketches, resource requirements, trade studies, and parametric data.
Cyberwar XXI: quantifying the unquantifiable: adaptive AI for next-generation conflict simulations
NASA Astrophysics Data System (ADS)
Miranda, Joseph; von Kleinsmid, Peter; Zalewski, Tony
2004-08-01
The era of the "Revolution in Military Affairs," "4th Generation Warfare" and "Asymmetric War" requires novel approaches to modeling warfare at the operational and strategic level of modern conflict. For example, "What if, in response to our planned actions, the adversary reacts in such-and-such a manner? What will our response be? What are the possible unintended consequences?" Next generation conflict simulation tools are required to help create and test novel courses of action (COAs) in support of real-world operations. Conflict simulations allow non-lethal and cost-effective exploration of the "what-if" of COA development. The challenge has been to develop an automated decision-support software tool which allows competing COAs to be compared in simulated dynamic environments. Principal Investigator Joseph Miranda's research is based on modeling an integrated political, military, economic, social, infrastructure and information (PMESII) environment. The main effort was to develop an adaptive AI engine which models agents operating within an operational-strategic conflict environment. This was implemented in Cyberwar XXI - a simulation which models COA selection in a PMESII environment. Within this framework, agents simulate decision-making processes and provide predictive capability of the potential behavior of Command Entities. The 2003 Iraq campaign is the first scenario ready for V&V testing.
Multirobot Lunar Excavation and ISRU Using Artificial-Neural-Tissue Controllers
NASA Astrophysics Data System (ADS)
Thangavelautham, Jekanthan; Smith, Alexander; Abu El Samid, Nader; Ho, Alexander; Boucher, Dale; Richard, Jim; D'Eleuterio, Gabriele M. T.
2008-01-01
Automation of site preparation and resource utilization on the Moon with teams of autonomous robots holds considerable promise for establishing a lunar base. Such multirobot autonomous systems would require limited human support infrastructure, complement necessary manned operations and reduce overall mission risk. We present an Artificial Neural Tissue (ANT) architecture as a control system for autonomous multirobot excavation tasks. An ANT approach requires much less human supervision and pre-programmed human expertise than previous techniques. Only a single global fitness function and a set of allowable basis behaviors need be specified. An evolutionary (Darwinian) selection process is used to `breed' controllers for the task at hand in simulation and the fittest controllers are transferred onto hardware for further validation and testing. ANT facilitates `machine creativity', with the emergence of novel functionality through a process of self-organized task decomposition of mission goals. ANT based controllers are shown to exhibit self-organization, employ stigmergy (communication mediated through the environment) and make use of templates (unlabeled environmental cues). With lunar in-situ resource utilization (ISRU) efforts in mind, ANT controllers have been tested on a multirobot excavation task in which teams of robots with no explicit supervision can successfully avoid obstacles, interpret excavation blueprints, perform layered digging, avoid burying or trapping other robots and clear/maintain digging routes.
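The evolutionary selection step described above (a single global fitness function scoring candidate controllers, with the fittest bred into the next generation) can be sketched generically; the genome encoding and fitness function below are toy stand-ins, not the ANT architecture itself:

    # Generic evolutionary-selection sketch in the spirit of the ANT approach: a
    # single global fitness function scores candidate controllers in simulation
    # and the fittest are bred for the next generation.
    import random

    GENOME_LEN, POP, GENERATIONS = 32, 50, 100

    def fitness(genome):
        """Stand-in for 'amount of regolith excavated in simulation'."""
        return sum(genome)

    def breed(a, b):
        cut = random.randrange(GENOME_LEN)
        child = a[:cut] + b[cut:]
        return [g ^ 1 if random.random() < 0.02 else g for g in child]  # mutation

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 5]            # keep the fittest 20%
        population = parents + [breed(*random.sample(parents, 2))
                                for _ in range(POP - len(parents))]

    best = max(population, key=fitness)   # candidate to transfer onto hardware for validation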
Progress on the Development of Future Airport Surface Wireless Communications Network
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Budinger, James M.; Brooks, David E.; Franklin, Morgan; DeHart, Steve; Dimond, Robert P.; Borden, Michael
2009-01-01
Continuing advances in airport surface management and improvements in airport surface safety are required to enable future growth in air traffic throughout the airspace, as airport arrival and departure delays create a major system bottleneck. These airport management and safety advances will be built upon improved communications, navigation, surveillance, and weather sensing, creating an information environment supporting system automation. The efficient movement of the digital data generated from these systems requires an underlying communications network infrastructure to connect data sources with the intended users with the required quality of service. Current airport surface communications consists primarily of buried copper or fiber cable. Safety related communications with mobile airport surface assets occurs over 25 kHz VHF voice and data channels. The available VHF spectrum, already congested in many areas, will be insufficient to support future data traffic requirements. Therefore, a broadband wireless airport surface communications network is considered a requirement for the future airport component of the air transportation system. Progress has been made on defining the technology and frequency spectrum for the airport surface wireless communications network. The development of a test and demonstration facility and the definition of required testing and standards development are now underway. This paper will review the progress and planned future work.
Galileo battery testing and the impact of test automation
NASA Technical Reports Server (NTRS)
Pertuch, W. T.; Dils, C. T.
1985-01-01
Test complexity, changes of test specifications, and the demand for tight control of tests led to the development of automated testing used for Galileo and other projects. The use of standardized interfacing, i.e., IEEE-488, with desktop computers and test instruments, resulted in greater reliability, repeatability, and accuracy of both control and data reporting. Increased flexibility of test programming has reduced costs by permitting a wide spectrum of test requirements at one station rather than many stations.
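Although the Galileo stations predate today's software stacks, the same IEEE-488 idea, one desktop computer scripting measurements from bus-addressed instruments, can be sketched with the modern PyVISA library; the GPIB address and SCPI commands below are assumptions for illustration:

    # Sketch of IEEE-488 (GPIB) instrument automation with PyVISA: one computer
    # addresses an instrument on the bus and logs repeatable measurements.
    import pyvisa

    rm = pyvisa.ResourceManager()
    dmm = rm.open_resource("GPIB0::12::INSTR")   # hypothetical multimeter address
    print(dmm.query("*IDN?"))                    # identify the instrument

    readings = []
    for _ in range(10):
        readings.append(float(dmm.query("MEAS:VOLT:DC?")))  # assumed SCPI command
    dmm.close()
    print("mean battery voltage:", sum(readings) / len(readings))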
Involving Users to Improve the Collaborative Logical Framework
2014-01-01
In order to support collaboration in web-based learning, there is a need for an intelligent support that facilitates its management during the design, development, and analysis of the collaborative learning experience and supports both students and instructors. At aDeNu research group we have proposed the Collaborative Logical Framework (CLF) to create effective scenarios that support learning through interaction, exploration, discussion, and collaborative knowledge construction. This approach draws on artificial intelligence techniques to support and foster an effective involvement of students to collaborate. At the same time, the instructors' workload is reduced as some of their tasks, especially those related to the monitoring of the students' behavior, are automated. After introducing the CLF approach, in this paper, we present two formative evaluations with users carried out to improve the design of this collaborative tool and thus enrich the personalized support provided. In the first one, we analyze, following the layered evaluation approach, the results of an observational study with 56 participants. In the second one, we tested the infrastructure to gather emotional data when carrying out another observational study with 17 participants. PMID:24592196
NASA Technical Reports Server (NTRS)
Price, Kent M.; Holdridge, Mark; Odubiyi, Jide; Jaworski, Allan; Morgan, Herbert K.
1991-01-01
The results are summarized of an unattended network operations technology assessment study for the Space Exploration Initiative (SEI). The scope of the work included: (1) identified possible enhancements due to the proposed Mars communications network; (2) identified network operations on Mars; (3) performed a technology assessment of possible supporting technologies based on current and future approaches to network operations; and (4) developed a plan for the testing and development of these technologies. The most important results obtained are as follows: (1) addition of a third Mars Relay Satellite (MRS) and MRS cross link capabilities will enhance the network's fault tolerance capabilities through improved connectivity; (2) network functions can be divided into the six basic ISO network functional groups; (3) distributed artificial intelligence technologies will augment more traditional network management technologies to form the technological infrastructure of a virtually unattended network; and (4) a great effort is required to bring the current network technology levels for manned space communications up to the level needed for an automated, fault-tolerant Mars communications network.
Improving the discoverability, accessibility, and citability of omics datasets: a case report.
Darlington, Yolanda F; Naumov, Alexey; McOwiti, Apollo; Kankanamge, Wasula H; Becnel, Lauren B; McKenna, Neil J
2017-03-01
Although omics datasets represent valuable assets for hypothesis generation, model testing, and data validation, the infrastructure supporting their reuse lacks organization and consistency. Using nuclear receptor signaling transcriptomic datasets as proof of principle, we developed a model to improve the discoverability, accessibility, and citability of published omics datasets. Primary datasets were retrieved from archives, processed to extract data points, then subjected to metadata enrichment and gap filling. The resulting secondary datasets were exposed on responsive web pages to support mining of gene lists, discovery of related datasets, and single-click citation integration with popular reference managers. Automated processes were established to embed digital object identifier-driven links to the secondary datasets in associated journal articles, small molecule and gene-centric databases, and a dataset search engine. Our model creates multiple points of access to reprocessed and reannotated derivative datasets across the digital biomedical research ecosystem, promoting their visibility and usability across disparate research communities.
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.
Development of Automated Testing Tools for Traffic Control Signals and Devices
DOT National Transportation Integrated Search
2012-01-30
Through a coordinated effort among the electrical engineering research team of the Florida State University (FSU) and key Florida Department of Transportation (FDOT) personnel, an automated testing system for National Electrical Manufacturers Associa...
Automation of the space station core module power management and distribution system
NASA Technical Reports Server (NTRS)
Weeks, David J.
1988-01-01
Under the Advanced Development Program for Space Station, Marshall Space Flight Center has been developing advanced automation applications for the Power Management and Distribution (PMAD) system inside the Space Station modules for the past three years. The Space Station Module Power Management and Distribution System (SSM/PMAD) test bed features three artificial intelligence (AI) systems coupled with conventional automation software functioning in an autonomous or closed-loop fashion. The AI systems in the test bed include a baseline scheduler/dynamic rescheduler (LES), a load shedding management system (LPLMS), and a fault recovery and management expert system (FRAMES). This test bed will be part of the NASA Systems Autonomy Demonstration for 1990 featuring cooperating expert systems in various Space Station subsystem test beds. It is concluded that advanced automation technology involving AI approaches is sufficiently mature to begin applying the technology to current and planned spacecraft applications including the Space Station.
NASA Astrophysics Data System (ADS)
Perin, A.; Dhalla, F.; Gayet, P.; Serio, L.
2017-12-01
SM18 is CERN's main facility for testing superconducting accelerator magnets and superconducting RF cavities. Its cryogenic infrastructure will have to be significantly upgraded in the coming years, starting in 2019, to meet the testing requirements for the LHC High Luminosity project and for the R&D program for superconducting magnets and RF equipment until 2023 and beyond. This article presents the assessment of the cryogenic needs based on the foreseen test program and on past testing experience. The current configuration of the cryogenic infrastructure is presented and several possible upgrade scenarios are discussed. The chosen upgrade configuration is then described and the characteristics of the main newly required cryogenic equipment, in particular a new 35 g/s helium liquefier, are presented. The upgrade implementation strategy and plan to meet the required schedule are then described.
Test procedure for validation of automated distress data : project summary.
DOT National Transportation Integrated Search
2017-01-01
For distress surveys of asphalt pavements, the automated results from two vendors compared reasonably closely in ratings to the manual methods. In addition, automated ratings for jointed concrete pavement show much greater inconsistency between diffe...
Development of an automated fuzing station for the future armored resupply vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chesser, J.B.; Jansen, J.F.; Lloyd, P.D.
1995-03-01
The US Army is developing the Advanced Field Artillery System (AFAS), a next generation armored howitzer. The Future Armored Resupply Vehicle (FARV) will be its companion ammunition resupply vehicle. The FARV will automate the supply of ammunition and fuel to the AFAS, which will increase capabilities over the current system. One of the functions being considered for automation is ammunition processing. Oak Ridge National Laboratory is developing equipment to demonstrate automated ammunition processing. One of the key operations to be automated is fuzing. The projectiles are initially unfuzed, and a fuze must be inserted and threaded into the projectile as part of the processing. A constraint on the design solution is that the ammunition cannot be modified to simplify automation. The problem was analyzed to determine the alignment requirements. Using the results of the analysis, ORNL designed, built, and tested a test stand to verify the selected design solution.
Benchmarking infrastructure for mutation text mining.
Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo
2014-02-25
Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
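To make the "metrics via SPARQL, no programming needed" point above concrete, here is a hedged sketch that counts true positives by joining system and gold annotations in RDF; the ontology terms and file name are invented, not the benchmark's actual schema:

    # Illustrative sketch of computing an evaluation count with SPARQL over RDF
    # annotations, run here via rdflib rather than custom evaluation code.
    from rdflib import Graph

    g = Graph()
    g.parse("gold_and_system_annotations.ttl", format="turtle")  # assumed corpus file

    query = """
    PREFIX ex: <http://example.org/mutation#>
    SELECT (COUNT(?m) AS ?tp)
    WHERE {
      ?m a ex:SystemMutationAnnotation ;
         ex:normalizedForm ?form ;
         ex:document ?doc .
      ?g a ex:GoldMutationAnnotation ;
         ex:normalizedForm ?form ;
         ex:document ?doc .
    }
    """
    for row in g.query(query):
        print("true positives:", row.tp)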
Merat, Natasha; Louw, Tyron; Madigan, Ruth; Wilbrink, Marc; Schieben, Anna
2018-03-31
As the desire for deploying automated ("driverless") vehicles increases, there is a need to understand how they might communicate with other road users in a mixed traffic, urban, setting. In the absence of an active and responsible human controller in the driving seat, who might currently communicate with other road users in uncertain/conflicting situations, in the future, understanding a driverless car's behaviour and intentions will need to be relayed via easily comprehensible, intuitive and universally intelligible means, perhaps presented externally via new vehicle interfaces. This paper reports on the results of a questionnaire-based study, delivered to 664 participants, recruited during live demonstrations of an Automated Road Transport Systems (ARTS; SAE Level 4), in three European cities. The questionnaire sought the views of pedestrians and cyclists, focussing on whether respondents felt safe interacting with ARTS in shared space, and also what externally presented travel behaviour information from the ARTS was important to them. Results showed that most pedestrians felt safer when the ARTS were travelling in designated lanes, rather than in shared space, and the majority believed they had priority over the ARTS, in the absence of such infrastructure. Regardless of lane demarcations, all respondents highlighted the importance of receiving some communication information about the behaviour of the ARTS, with acknowledgement of their detection by the vehicle being the most important message. There were no clear patterns across the respondents, regarding preference of modality for these external messages, with cultural and infrastructural differences thought to govern responses. Generally, however, conventional signals (lights and beeps) were preferred to text-based messages and spoken words. The results suggest that until these driverless vehicles are able to provide universally comprehensible externally presented information or messages during interaction with other road users, they are likely to contribute to confusing and conflicting interactions between these actors, especially in a shared space setting, which may, therefore, reduce efficient traffic flow.
Exploring Cognition Using Software Defined Radios for NASA Missions
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Reinhart, Richard C.
2016-01-01
NASA missions typically operate using a communication infrastructure that requires significant schedule planning with limited flexibility when the needs of the mission change. Parameters such as modulation, coding scheme, frequency, and data rate are fixed for the life of the mission. This is due to antiquated hardware and software for both the space and ground assets and a very complex set of mission profiles. Automated techniques in place by commercial telecommunication companies are being explored by NASA to determine their usability for reducing cost and increasing science return. Adding cognition, the ability to learn from past decisions and adjust behavior, is also being investigated. Software Defined Radios are an ideal way to implement cognitive concepts. Cognition can be considered in many different aspects of the communication system. Radio functions, such as frequency, modulation, data rate, coding and filters, can be adjusted based on measurements of signal degradation. Data delivery mechanisms and route changes based on past successes and failures can be made to more efficiently deliver the data to the end user. Automated antenna pointing can be added to improve gain, coverage, or adjust the target. Scheduling improvements and automation to reduce the dependence on humans provide more flexible capabilities. The Cognitive Communications project, funded by the Space Communication and Navigation Program, is exploring these concepts and using the SCaN Testbed on board the International Space Station to implement them as they evolve. The SCaN Testbed contains three Software Defined Radios and a flight computer. These four computing platforms, along with a tracking antenna system and the supporting ground infrastructure, will be used to implement various concepts in a system similar to those used by missions. Multiple universities and SBIR companies are supporting this investigation. This paper will describe the cognitive system ideas under consideration and the plan for implementing them on platforms, including the SCaN Testbed. Discussions in the paper will include how these concepts might be used to reduce cost and improve the science return for NASA missions.
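One of the cognitive behaviours mentioned above, adjusting modulation and coding from measured signal quality while remembering which choices worked, can be illustrated with a toy adaptation loop; thresholds and mode names are assumptions, not NASA flight software:

    # Toy sketch of a cognitive link-adaptation loop: step the modulation/coding
    # choice up or down from measured SNR and remember which choices keep failing.
    MODES = ["BPSK r1/2", "QPSK r1/2", "QPSK r3/4", "8PSK r3/4"]  # ascending data rate

    def choose_mode(current: int, snr_db: float, history: dict) -> int:
        """Return the index of the next mode based on measured SNR and past success."""
        if snr_db < 4.0 and current > 0:
            nxt = current - 1                      # link degrading: back off
        elif snr_db > 10.0 and current < len(MODES) - 1:
            nxt = current + 1                      # margin available: try a faster mode
        else:
            nxt = current
        if history.get(nxt, 0) < -3:               # learned: this mode keeps failing
            nxt = current
        return nxt

    history = {}
    mode = 1
    for snr in [12.0, 11.5, 9.0, 3.5, 2.8, 8.0]:   # simulated SNR measurements
        mode = choose_mode(mode, snr, history)
        print(f"SNR {snr:5.1f} dB -> {MODES[mode]}")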
VIM: A Platform for Violent Intent Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.; Schryver, Jack C.; Whitney, Paul D.
2009-03-31
Radical and contentious political/religious activism may or may not evolve into violent behavior depending on contextual factors related to social, political, cultural and infrastructural conditions. Significant theoretical advances have been made in understanding these contextual factors and the import of their interrelations. However, there has been relatively little progress in the development of processes and capabilities which leverage such theoretical advances to automate the anticipatory analysis of violent intent. In this paper, we describe a framework which implements such processes and capabilities, and discuss the implications of using the resulting system to assess the emergence of radicalization leading to violence.
CASE tools and UML: state of the ART.
Agarwal, S
2001-05-01
With the increasing need for automated tools to assist complex systems development, software design methods are becoming popular. This article analyzes the state of the art in computer-aided software engineering (CASE) tools and unified modeling language (UML), focusing on their evolution, merits, and industry usage. It identifies managerial issues for the tools' adoption and recommends an action plan to select and implement them. While CASE and UML offer inherent advantages like cheaper, shorter, and efficient development cycles, they suffer from poor user satisfaction. The critical success factors for their implementation include, among others, management and staff commitment, proper corporate infrastructure, and user training.
NASA Technical Reports Server (NTRS)
1995-01-01
The NASA Advisory Council Task Force on the Shuttle-Mir rendezvous and docking missions examined a number of specific issues related to the Shuttle-Mir program. Three teams composed of Task Force members and technical advisors were formed to address the following issues: preliminary results from STS-71 and the status of preparations for STS-74; NASA's presence in Russia; and NASA's automated data processing and telecommunications (ADP/T) infrastructure in Russia. The three review team reports have been included in the fifth report of the Task Force.
Programs Model the Future of Air Traffic Management
NASA Technical Reports Server (NTRS)
2010-01-01
Through Small Business Innovation Research (SBIR) contracts with Ames Research Center, Intelligent Automation Inc., based in Rockville, Maryland, advanced specialized software the company had begun developing with U.S. Department of Defense funding. The agent-based infrastructure now allows NASA's Airspace Concept Evaluation System to explore ways of improving the utilization of the National Airspace System (NAS), providing flexible modeling of every part of the NAS down to individual planes, airports, control centers, and even weather. The software has been licensed to a number of aerospace and robotics customers, and has even been used to model the behavior of crowds.
Mlinaric, Ana; Milos, Marija; Coen Herak, Désirée; Fucek, Mirjana; Rimac, Vladimira; Zadro, Renata; Rogic, Dunja
2018-02-23
The need to satisfy high-throughput demands for laboratory tests continues to be a challenge. Therefore, we aimed to automate the postanalytical phase in the hematology and coagulation laboratory by autovalidation of complete blood count (CBC) and routine coagulation test results (prothrombin time [PT], international normalized ratio [PT-INR], activated partial thromboplastin time [APTT], fibrinogen, antithrombin activity [AT] and thrombin time [TT]). Work efficacy and turnaround time (TAT) before and after implementation of the automated solutions were then compared. Ordering panels tailored to specific patient populations were implemented. Rerun and reflex testing rules were set in the respective analyzers' software (Coulter DxH Connectivity 1601, Beckman Coulter, FL, USA; AutoAssistant, Siemens Healthcare Diagnostics, Germany), and sample status information was transferred into the laboratory information system. To evaluate if the automation improved TAT and efficacy, data from manually verified results in September and October of 2015 were compared with the corresponding period in 2016 when autovalidation was implemented. Autovalidation rates of 63% for CBC and 65% for routine coagulation test results were achieved. At the TAT of 120 min, the percentage of reported results increased substantially for all analyzed tests, being above 90% for CBC, PT, PT-INR and fibrinogen and 89% for APTT. This output was achieved with three fewer laboratory technicians than in the period before the postanalytical phase was automated. Automation allowed optimized laboratory workflow for specific patient populations, thereby ensuring standardized results reporting. Autovalidation of test results proved to be an efficient tool for improvement of laboratory work efficacy and TAT.
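An autovalidation rule of the kind described above can be sketched as a simple decision function; the limits, delta check, and flag handling below are illustrative assumptions, not the laboratory's actual rule set:

    # Hedged sketch of an autovalidation rule: a PT-INR result is released
    # automatically only if it passes analyzer flags, limit checks and a delta
    # check against the previous result; otherwise it goes to manual review.
    def autovalidate_inr(value, flags, previous=None,
                         limits=(0.8, 8.0), delta_limit=1.0):
        """Return 'release' or 'manual review' for a PT-INR result (illustrative thresholds)."""
        if flags:                                  # any analyzer error/warning flag
            return "manual review"
        low, high = limits
        if not (low <= value <= high):             # outside autovalidation limits
            return "manual review"
        if previous is not None and abs(value - previous) > delta_limit:
            return "manual review"                 # delta check failed
        return "release"

    print(autovalidate_inr(2.4, flags=[], previous=2.1))   # -> release
    print(autovalidate_inr(6.5, flags=[], previous=2.1))   # -> manual review (delta)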
Automated touch sensing in the mouse tapered beam test using Raspberry Pi.
Ardesch, Dirk Jan; Balbi, Matilde; Murphy, Timothy H
2017-11-01
Rodent models of neurological disease such as stroke are often characterized by motor deficits. One of the tests that are used to assess these motor deficits is the tapered beam test, which provides a sensitive measure of bilateral motor function based on foot faults (slips) made by a rodent traversing a gradually narrowing beam. However, manual frame-by-frame scoring of video recordings is necessary to obtain test results, which is time-consuming and prone to human rater bias. We present a cost-effective method for automated touch sensing in the tapered beam test. Capacitive touch sensors detect foot faults onto the beam through a layer of conductive paint, and results are processed and stored on a Raspberry Pi computer. Automated touch sensing using this method achieved high sensitivity (96.2%) as compared to 'gold standard' manual video scoring. Furthermore, it provided a reliable measure of lateralized motor deficits in mice with unilateral photothrombotic stroke: results indicated an increased number of contralesional foot faults for up to 6 days after ischemia. The automated adaptation of the tapered beam test produces results immediately after each trial, without the need for labor-intensive post-hoc video scoring. It also increases objectivity of the data as it requires less experimenter involvement during analysis. Automated touch sensing may provide a useful adaptation to the existing tapered beam test in mice, while the simplicity of the hardware lends itself to potential further adaptations to related behavioral tests.
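A minimal sketch of the event-logging side of such a system, assuming a touch sensor wired to a Raspberry Pi GPIO pin that goes high on contact (the published system uses capacitive sensing through conductive paint and different hardware details), could look like this:

    # Minimal sketch: count and timestamp touch events on one GPIO pin during a
    # single beam trial. Requires the RPi.GPIO library on a Raspberry Pi; the pin
    # number is a hypothetical choice.
    import time
    import RPi.GPIO as GPIO

    TOUCH_PIN = 17                      # hypothetical GPIO pin for one side sensor
    events = []

    def on_touch(channel):
        events.append(time.time())      # timestamp each detected foot fault
        print(f"foot fault #{len(events)} on pin {channel}")

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TOUCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    GPIO.add_event_detect(TOUCH_PIN, GPIO.RISING, callback=on_touch, bouncetime=200)

    try:
        time.sleep(60)                  # record one 60-second beam trial
    finally:
        GPIO.cleanup()
        print("total foot faults this trial:", len(events))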
2011 Information Systems Summit 2
2011-04-06
...to automate. Some criteria that should be considered: Are the tests easy to automate? What makes a test easy to automate is the ability to script... ANSI-748-B defines 32 criteria needed for a FAR/DFAR-compliant Earned Value Management System. These criteria address 5 areas of Earned Value... are the basis of Increasing the Probability of Success of any program. But there are 11 critical criteria that must be present no matter what...
Information Technology Support for Clinical Genetic Testing within an Academic Medical Center.
Aronson, Samuel; Mahanta, Lisa; Ros, Lei Lei; Clark, Eugene; Babb, Lawrence; Oates, Michael; Rehm, Heidi; Lebo, Matthew
2016-01-20
Academic medical centers require many interconnected systems to fully support genetic testing processes. We provide an overview of the end-to-end support that has been established surrounding a genetic testing laboratory within our environment, including both laboratory and clinician facing infrastructure. We explain key functions that we have found useful in the supporting systems. We also consider ways that this infrastructure could be enhanced to enable deeper assessment of genetic test results in both the laboratory and clinic.
Summers, Thomas; Johnson, Viviana V; Stephan, John P; Johnson, Gloria J; Leonard, George
2009-08-01
Massive transfusion of D- trauma patients in the combat setting involves the use of D+ red blood cells (RBCs) or whole blood along with suboptimal pretransfusion test result documentation. This presents challenges to the transfusion service of tertiary care military hospitals who ultimately receive these casualties because initial D typing results may only reflect the transfused RBCs. After patients are stabilized, mixed-field reaction results on D typing indicate the patient's true inherited D phenotype. This case series illustrates the utility of automated gel column agglutination in detecting mixed-field reactions in these patients. The transfusion service test results, including the automated gel column agglutination D typing results, of four massively transfused D- patients transfused D+ RBCs is presented. To test the sensitivity of the automated gel column agglutination method in detecting mixed-field agglutination reactions, a comparative analysis of three automated technologies using predetermined mixtures of D+ and D- RBCs is also presented. The automated gel column agglutination method detected mixed-field agglutination in D typing in all four patients and in the three prepared control specimens. The automated microwell tube method identified one of the three prepared control specimens as indeterminate, which was subsequently manually confirmed as a mixed-field reaction. The automated solid-phase method was unable to detect any mixed fields. The automated gel column agglutination method provides a sensitive means for detecting mixed-field agglutination reactions in the determination of the true inherited D phenotype of combat casualties transfused massive amounts of D+ RBCs.
Automated Non-Destructive Testing Array Evaluation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, T; Zavaljevski, N; Bakhtiari, S
2004-12-24
Automated Non-Destructive Testing Array Evaluation System (ANTARES) software algorithms were developed for use on X-probe(tm) data. Data used for algorithm development and preliminary performance determination were obtained from a USNRC mock-up at Argonne and from EPRI.
NASA Technical Reports Server (NTRS)
Harrison, Cecil A.
1986-01-01
The efforts to automate the electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center were examined. A battery of nine standard tests is to be integrated by means of a desktop computer-controller in order to provide near real-time data assessment, store the data acquired during testing on flexible disk, and provide computer production of the certification report.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-06
... designed to replace a specific legacy ACS function. Each release will begin with a test and will end with... Pre-Approval Please be advised that this first phase of the DIS test is limited to the above CBP and... all interested parties to comment on the design, implementation and conduct of the test at any time...
ERIC Educational Resources Information Center
Lee, William M.; And Others
Projects to develop an automated item banking and test development system have been undertaken on several occasions at the Air Force Human Resources Laboratory (AFHRL) throughout the past 10 years. Such a system permits the construction of tests in far less time and with a higher degree of accuracy than earlier test construction procedures. This…
Automated Microwave Dielectric Constant Measurement
1987-03-01
NSWC TR 86-46 (AD-A184 182), by B. C. Glancy and A. Krall, Research and Technology Department, Silver Spring, Maryland. The measurement of dielectric constants as a function of microwave frequency has been simplified using an automated testing apparatus. This automated procedure is based on the use of a…
ERIC Educational Resources Information Center
Lingard, Bob; Sellar, Sam; Savage, Glenn C.
2014-01-01
This paper examines the re-articulation of social justice as equity in schooling policy through national and global testing and data infrastructures. It focuses on the Australian National Assessment Program--Literacy and Numeracy (NAPLAN) and the OECD's Programme for International Student Assessment (PISA). We analyse the discursive reconstitution…
Monitoring the Performance of Human and Automated Scores for Spoken Responses
ERIC Educational Resources Information Center
Wang, Zhen; Zechner, Klaus; Sun, Yu
2018-01-01
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
Yoon, Nara; Do, In-Gu; Cho, Eun Yoon
2014-09-01
Easy and accurate HER2 testing is essential when considering the prognostic and predictive significance of HER2 in breast cancer. The use of a fully automated, quantitative FISH assay would be helpful to detect HER2 amplification in breast cancer tissue specimens with reduced inter-laboratory variability. We compared the concordance of HER2 status as assessed by an automated FISH staining system to manual FISH testing. Using 60 formalin-fixed paraffin-embedded breast carcinoma specimens, we assessed HER2 immunoexpression with two antibodies (DAKO HercepTest and CB11). In addition, HER2 status was evaluated with automated FISH using the Leica FISH System for BOND and a manual FISH using the Abbott PathVysion DNA Probe Kit. All but one specimen were successfully stained using both FISH methods. When the data were divided into two groups according to HER2/CEP17 ratio, positive and negative, the results from both the automated and manual FISH techniques were identical for all 59 evaluable specimens. The HER2 and CEP17 copy numbers and HER2/CEP17 ratio showed great agreement between both FISH methods. The automated FISH technique was interpretable with signal intensity similar to those of the manual FISH technique. In contrast with manual FISH, the automated FISH technique showed well-preserved architecture due to low membrane digestion. HER2 immunohistochemistry and FISH results showed substantial significant agreement (κ = 1.0, p < 0.001). HER2 status can be reliably determined using a fully automated HER2 FISH system with high concordance to the well-established manual FISH method. Because of stable signal intensity and high staining quality, the automated FISH technique may be more appropriate than manual FISH for routine applications.
The BAARA (Biological AutomAted RAdiotracking) System: A New Approach in Ecological Field Studies
Řeřucha, Šimon; Bartonička, Tomáš; Jedlička, Petr; Čížek, Martin; Hlouša, Ondřej; Lučan, Radek; Horáček, Ivan
2015-01-01
Radiotracking is an important and often the only possible method to explore specific habits and the behaviour of animals, but it has proven to be very demanding and time-consuming, especially when frequent positioning of a large group is required. Our aim was to address this issue by making the process partially automated, to mitigate the demands and related costs. This paper presents a novel automated tracking system that consists of a network of automated tracking stations deployed within the target area. Each station reads the signals from telemetry transmitters, estimates the bearing and distance of the tagged animals and records their position. The station is capable of tracking a theoretically unlimited number of transmitters on different frequency channels with the period of 5–15 seconds per single channel. An ordinary transmitter that fits within the supported frequency band might be used with BAARA (Biological AutomAted RAdiotracking); an extra option is the use of a custom-programmable transmitter with configurable operational parameters, such as the precise frequency channel or the transmission parameters. This new approach to a tracking system was tested for its applicability in a series of field and laboratory tests. BAARA has been tested within fieldwork explorations of Rousettus aegyptiacus during field trips to Dakhla oasis in Egypt. The results illustrate the novel perspective which automated radiotracking opens for the study of spatial behaviour, particularly in addressing topics in the domain of population ecology. PMID:25714910
Proof-of-concept automation of propellant processing
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Schallhorn, P. A.
1989-01-01
For space-based propellant production, automation of the process is needed. Currently, all phases of terrestrial production have some form of human interaction. A mixer was acquired to help perform the tasks of automation. A heating system to be used with the mixer was designed, built, and installed. Tests performed on the heating system verify the design criteria. An IBM PS/2 personal computer was acquired for the future automation work. It is hoped that some of the mixing process itself will be automated. This is a concept demonstration task, proving that propellant production can be automated reliably.
Development of an automated asbestos counting software based on fluorescence microscopy.
Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio
2015-01-01
An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop a fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for the samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for the use in epidemiological research.
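The underlying recognition idea, segmenting bright objects in a fluorescence image and counting only the elongated ones as candidate fibers, can be sketched with OpenCV; the size and aspect-ratio thresholds are illustrative and this is not the authors' software:

    # Illustrative sketch of automated fiber counting: threshold a fluorescence
    # image, find connected objects, and count those that are long and thin.
    # Assumes OpenCV 4 (two-value findContours return).
    import cv2

    def count_fibers(path, min_len_px=20, min_aspect=3.0):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        fibers = 0
        for c in contours:
            (_, _), (w, h), _ = cv2.minAreaRect(c)
            length, width = max(w, h), max(min(w, h), 1e-6)
            if length >= min_len_px and length / width >= min_aspect:
                fibers += 1          # elongated object: count as a candidate fiber
        return fibers

    print(count_fibers("field_of_view.png"))   # hypothetical image file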
Implementation of an automated test setup for measuring electrical conductance of concrete.
DOT National Transportation Integrated Search
2007-01-01
This project was designed to provide the Virginia Department of Transportation (VDOT) with an automated laboratory setup for performing the rapid chloride permeability test (RCPT) to measure the electrical conductance of concrete in accordance with a...
NASA Astrophysics Data System (ADS)
Zajic, D.; Pace, J. C.; Whiteman, C. D.; Hoch, S.
2011-12-01
This presentation describes a new facility at Dugway Proving Ground (DPG), Utah that can be used to study airflow over complex terrain, and to evaluate how airflow over a mountain barrier affects wind patterns over adjacent flatter terrain. DPG's primary mission is to conduct testing, training, and operational assessments of chemical and biological weapon systems. These operations require very precise weather forecasts. Most test operations at DPG are conducted on fairly flat test ranges having uniform surface cover, where airflow patterns are generally well-understood. However, the DPG test ranges are located alongside large, isolated mountains, most notably Granite Mountain, Camelback Mountain, and the Cedar Mountains. Airflows generated over, or influenced by, these mountains can affect wind patterns on the test ranges. The new facility, the Granite Mountain Atmospheric Sciences Testbed, or GMAST, is designed to facilitate studies of airflow interactions with topography. This facility will benefit DPG by improving understanding of how mountain airflows interact with the test range conditions. A core infrastructure of weather sensors around and on Granite Mountain has been developed including instrumented towers and remote sensors, along with automated data collection and archival systems. GMAST is expected to be in operation for a number of years and will provide a reference domain for mountain meteorology studies, with data useful for analysts, modelers and theoreticians. Visiting scientists are encouraged to collaborate with DPG personnel to utilize this valuable scientific resource and to add further equipment and scientific designs for both short-term and long-term atmospheric studies. Several of the upcoming MATERHORN (MountAin TERrain atmospHeric mOdeling and obseRvatioNs) project field tests will be conducted at DPG, giving an example of GMAST utilization and collaboration between DPG and visiting scientists.
NASA Technical Reports Server (NTRS)
Dowden, Donald J.; Bessette, Denis E.
1987-01-01
The AFTI F-16 Automated Maneuvering Attack System has undergone developmental and demonstration flight testing over a total of 347.3 flying hours in 237 sorties. The emphasis of this phase of the flight test program was on the development of automated guidance and control systems for air-to-air and air-to-ground weapons delivery, using a digital flight control system, dual avionics multiplex buses, an advanced FLIR sensor with laser ranger, integrated flight/fire-control software, advanced cockpit display and controls, and modified core Multinational Stage Improvement Program avionics.
van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier
2016-01-01
Objective Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. Setting The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Primary and secondary outcome measures Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. Secondary outcome measure was the user-friendliness of the POCT analyser. Results Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as good as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis. κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for automated and visual urinalysis for leucocytes is rather poor (0.256 for POCT and 0.197 for visual, respectively, ranked as fair and poor). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and considered to be not prone to flaws. Conclusions Automated urinalysis performed as good as traditional visual urinalysis on reading of nitrite, leucocytes and erythrocytes in routine general practice. Implementation of automated urinalysis in general practice is justified as automation is expected to reduce human errors in patient identification and transcribing of results. PMID:27503860
van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier
2016-08-08
Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and Cohen's κ coefficient for agreement. The secondary outcome measure was the user-friendliness of the POCT analyser. Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as well as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis: κ values are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for leucocytes is rather poor (0.256 for automated and 0.197 for visual urinalysis, ranked as fair and poor, respectively). κ values for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and considered not prone to flaws. Automated urinalysis performed as well as traditional visual urinalysis for the reading of nitrite, leucocytes and erythrocytes in routine general practice. Implementation of automated urinalysis in general practice is justified, as automation is expected to reduce human errors in patient identification and transcribing of results. Published by the BMJ Publishing Group Limited.
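To make the agreement statistics reported above concrete, the following minimal Python sketch computes sensitivity, specificity, PPV, NPV and Cohen's κ from a 2x2 table of point-of-care results against a laboratory reference. The counts used in the example are invented placeholders, not data from the study.

# Minimal sketch: agreement statistics for a binary point-of-care test
# versus a laboratory reference. The counts below are placeholders,
# not data from the urinalysis study.

def agreement_stats(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    observed = (tp + tn) / total
    # Expected agreement by chance, then Cohen's kappa.
    p_pos = ((tp + fp) / total) * ((tp + fn) / total)
    p_neg = ((fn + tn) / total) * ((fp + tn) / total)
    expected = p_pos + p_neg
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, ppv, npv, kappa

if __name__ == "__main__":
    sens, spec, ppv, npv, kappa = agreement_stats(tp=40, fp=5, fn=8, tn=147)
    print(f"sensitivity={sens:.3f} specificity={spec:.3f} "
          f"PPV={ppv:.3f} NPV={npv:.3f} kappa={kappa:.3f}")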
Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing
NASA Astrophysics Data System (ADS)
Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel
Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can be potentially executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show that, in general, no single technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
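As a rough illustration of the test selection strategies compared above, the sketch below contrasts plain random testing with a simple distance-based adaptive random testing loop over a numeric input space. The input space, distance metric and budget are invented for the example and do not reflect the paper's UML/MARTE environment models or genetic-algorithm setup.

# Illustrative sketch of random vs. adaptive random test-case selection
# over a numeric input space. All parameters are invented placeholders.
import random

def random_testing(gen_input, run_test, budget):
    """Baseline: sample inputs uniformly until a failure or the budget is spent."""
    for i in range(budget):
        case = gen_input()
        if not run_test(case):
            return case, i + 1
    return None, budget

def adaptive_random_testing(gen_input, run_test, budget, candidates=10):
    """Pick, among a few random candidates, the one farthest from all
    previously executed cases (simple Euclidean distance)."""
    executed = []
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    for i in range(budget):
        pool = [gen_input() for _ in range(candidates)]
        if executed:
            case = max(pool, key=lambda c: min(dist(c, e) for e in executed))
        else:
            case = pool[0]
        executed.append(case)
        if not run_test(case):
            return case, i + 1
    return None, budget

if __name__ == "__main__":
    # Toy system under test: fails only in a small corner of a 2-D input space.
    passes = lambda c: not (c[0] > 0.9 and c[1] > 0.9)
    gen = lambda: (random.random(), random.random())
    print(adaptive_random_testing(gen, passes, budget=500))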
A compendium of controlled diffusion blades generated by an automated inverse design procedure
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1989-01-01
A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.
Spaceport Command and Control System Software Development
NASA Technical Reports Server (NTRS)
Glasser, Abraham
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires extensive, intensive testing to properly measure its capabilities. Automating the test procedures would save the project money on labor costs and make the testing process more efficient. Therefore, the Exploration Systems Division (formerly the Electrical Engineering Division) at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as to innovate upon the current automation process.
The SSM/PMAD automated test bed project
NASA Technical Reports Server (NTRS)
Lollar, Louis F.
1991-01-01
The Space Station Module/Power Management and Distribution (SSM/PMAD) autonomous subsystem project was initiated in 1984. The project's goal has been to design and develop an autonomous, user-supportive PMAD test bed simulating the SSF Hab/Lab module(s). An eighteen kilowatt SSM/PMAD test bed model with a high degree of automated operation has been developed. This advanced automation test bed contains three expert/knowledge based systems that interact with one another and with other more conventional software residing in up to eight distributed 386-based microcomputers to perform the necessary tasks of real-time and near real-time load scheduling, dynamic load prioritizing, and fault detection, isolation, and recovery (FDIR).
Advanced E-O test capability for Army Next-Generation Automated Test System (NGATS)
NASA Astrophysics Data System (ADS)
Errea, S.; Grigor, J.; King, D. F.; Matis, G.; McHugh, S.; McKechnie, J.; Nehring, B.
2015-05-01
The Future E-O (FEO) program was established to develop a flexible, modular, automated test capability as part of the Next Generation Automatic Test System (NGATS) program to support the test and diagnostic needs of currently fielded U.S. Army electro-optical (E-O) devices, while being expandable to address the requirements of future Navy, Marine Corps and Air Force E-O systems. Santa Barbara Infrared (SBIR) has designed, fabricated, and delivered three (3) prototype FEO systems for engineering and logistics evaluation prior to anticipated full-scale production beginning in 2016. In addition to a detailed overview of the FEO system hardware design, features, and testing capabilities, the paper also describes the integration of SBIR's EO-IR sensor and laser test software package, IRWindows 4™, into FEO to automate test execution, data collection and analysis, and the archiving and reporting of results.
Huber, A R; Méndez, A; Brunner-Agten, S
2013-01-01
Automatia, an ancient Greek goddess of luck who makes things happen by themselves, of her own will and without human engagement, is present in our daily life in the medical laboratory. Automation was introduced and perfected by clinical chemistry and has since expanded into other fields such as haematology, immunology, molecular biology and also coagulation testing. The initial small and relatively simple standalone instruments have been replaced by more complex systems that allow for multitasking. Integration of automated coagulation testing into total laboratory automation has become possible in recent years. Automation has many strengths and opportunities if its weaknesses and threats are respected. On the positive side, standardization, reduction of errors, reduction of cost and increase of throughput are clearly beneficial. Dependence on manufacturers, high initial cost and somewhat expensive maintenance are less favourable factors. The modern laboratory, and especially today's laboratory technicians and academic personnel, do not add value for the doctor and his patients by spending lots of time behind the machines. In the future the laboratory needs to contribute at the bedside, suggesting laboratory testing and providing support and interpretation of the obtained results. The human factor will continue to play an important role in haemostasis testing, yet under different circumstances.
The Careful Puppet Master: Reducing risk and fortifying acceptance testing with Jenkins CI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; Richman, Gabriel; DeStefano, John; Pryor, James; Rao, Tejas; Strecker-Kellogg, William; Wong, Tony
2015-12-01
Centralized configuration management, including the use of automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can quickly be pushed out to thousands of computers and, if that change is not properly and thoroughly tested and contains an error, could result in catastrophic damage to many services, potentially bringing an entire computer facility offline. Change management procedures can—and should—be formalized in order to prevent such accidents. However, like the configuration management process itself, if such procedures are not automated, they can be difficult to enforce strictly. Therefore, to reduce the risk of merging potentially harmful changes into our production Puppet environment, we have created an automated testing system, which includes the Jenkins CI tool, to manage our Puppet testing process. This system includes the proposed changes and runs Puppet on a pool of dozens of RedHat Enterprise Virtualization (RHEV) virtual machines (VMs) that replicate most of our important production services for the purpose of testing. This paper describes our automated test system and how it hooks into our production approval process for automatic acceptance testing. All pending changes that have been pushed to production must pass this validation process before they can be approved and merged into production.
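The following Python sketch illustrates the general idea of gating a configuration change on a no-op Puppet run across a pool of test machines, in the spirit of the validation process described above. The host list, SSH invocation and pass/fail criterion are assumptions made for the example, not the authors' actual Jenkins jobs or approval hooks.

# Sketch of acceptance-gating a Puppet change by running the agent in
# no-op mode on a pool of test VMs. Hostnames and the exact invocation
# are illustrative assumptions, not the production setup described above.
import subprocess

TEST_VMS = ["test-vm01.example.org", "test-vm02.example.org"]  # hypothetical pool

def noop_run(host):
    """Run the agent in no-op mode over SSH and return its exit code and output."""
    cmd = ["ssh", host, "sudo", "puppet", "agent", "--test", "--noop"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # 'puppet agent --test' uses detailed exit codes; a real gate would
    # inspect the generated report rather than just the exit status.
    return result.returncode, result.stdout

def gate_change(hosts):
    failures = []
    for host in hosts:
        code, _out = noop_run(host)
        if code not in (0, 2):   # 2 = changes would be applied, treated as OK here
            failures.append((host, code))
    return failures

if __name__ == "__main__":
    failed = gate_change(TEST_VMS)
    if failed:
        raise SystemExit(f"Change rejected; failures on: {failed}")
    print("All test VMs passed the no-op validation run.")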
NASA Astrophysics Data System (ADS)
Xie, Dengling; Xie, Yanjun; Liu, Peng; Tong, Lieshu; Chu, Kaiqin; Smith, Zachary J.
2017-02-01
Current flow-based blood counting devices require expensive and centralized medical infrastructure and are not appropriate for field use. In this paper we report a method to count red blood cells, white blood cells as well as platelets through a low-cost and fully-automated blood counting system. The approach consists of using a compact, custom-built microscope with large field-of-view to record bright-field and fluorescence images of samples that are diluted with a single, stable reagent mixture and counted using automatic algorithms. Sample collection is performed manually using a spring loaded lancet, and volume-metering capillary tubes. The capillaries are then dropped into a tube of pre-measured reagents and gently shaken for 10-30 seconds. The sample is loaded into a measurement chamber and placed on a custom 3D printed platform. Sample translation and focusing is fully automated, and a user has only to press a button for the measurement and analysis to commence. Cost of the system is minimized through the use of custom-designed motorized components. We performed a series of comparative experiments by trained and untrained users on blood from adults and children. We compare the performance of our system, as operated by trained and untrained users, to the clinical gold standard using a Bland-Altman analysis, demonstrating good agreement of our system to the clinical standard. The system's low cost, complete automation, and good field performance indicate that it can be successfully translated for use in low-resource settings where central hematology laboratories are not accessible.
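To make the Bland-Altman comparison mentioned above concrete, the short sketch below computes the mean difference (bias) and 95% limits of agreement between two paired series of counts. The numbers are placeholders, not measurements from the device study.

# Minimal Bland-Altman sketch: bias and 95% limits of agreement between
# paired measurements (e.g., device counts vs. a clinical analyzer).
# The example values are placeholders, not data from the study above.
import statistics

def bland_altman(device, reference):
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

if __name__ == "__main__":
    device    = [4.6, 5.1, 6.8, 7.2, 5.9, 4.4]     # hypothetical WBC counts (10^9/L)
    reference = [4.8, 5.0, 6.5, 7.5, 6.1, 4.5]
    bias, lo, hi = bland_altman(device, reference)
    print(f"bias={bias:.2f}, limits of agreement=({lo:.2f}, {hi:.2f})")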
A conceptual model of the automated credibility assessment of the volunteered geographic information
NASA Astrophysics Data System (ADS)
Idris, N. H.; Jackson, M. J.; Ishak, M. H. I.
2014-02-01
The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information to complement the maintenance process of authoritative mapping data sources, and to help realize the development of Digital Earth, has been recognized. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. Two main components are proposed to be assessed in the conceptual model: metadata and data. The metadata component comprises indicators for the hosting (websites) and the sources of data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess these components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by web citizen providers.
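As an illustration of the metadata-assessment idea above, the sketch below trains a supervised text classifier over short descriptions of hosting sites and data sources and assigns a coarse credibility class to a new source. The texts, labels and class names are invented examples, not the indicators proposed in the paper.

# Illustrative sketch of supervised text categorization for metadata
# assessment. Texts, labels and classes are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "national mapping agency open data portal with documented QA process",
    "anonymous blog post with unreferenced coordinates",
    "university research group sensor map, methodology described",
    "forum thread sharing a screenshot of a map mashup",
]
train_labels = ["higher", "lower", "higher", "lower"]   # hypothetical credibility classes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_source = ["volunteer-run portal, no contributor history or references"]
print(model.predict(new_source))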
The Chandra Source Catalog: Processing and Infrastructure
NASA Astrophysics Data System (ADS)
Evans, Janet; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Hall, Diane M.; Miller, Joseph B.; Plummer, David A.; Zografou, Panagoula; Primini, Francis A.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.
2009-09-01
Chandra Source Catalog processing recalibrates each observation using the latest available calibration data, and employs a wavelet-based source detection algorithm to identify all the X-ray sources in the field of view. Source properties are then extracted from each detected source that is a candidate for inclusion in the catalog. Catalog processing is completed by matching sources across multiple observations, merging common detections, and applying quality assurance checks. The Chandra Source Catalog processing system shares a common processing infrastructure and utilizes much of the functionality that is built into the Standard Data Processing (SDP) pipeline system that provides calibrated Chandra data to end-users. Other key components of the catalog processing system have been assembled from the portable CIAO data analysis package. Minimal new software tool development has been required to support the science algorithms needed for catalog production. Since processing pipelines must be instantiated for each detected source, the number of pipelines that are run during catalog construction is a factor of order 100 times larger than for SDP. The increased computational load, and inherent parallel nature of the processing, is handled by distributing the workload across a multi-node Beowulf cluster. Modifications to the SDP automated processing application to support catalog processing, and extensions to Chandra Data Archive software to ingest and retrieve catalog products, complete the upgrades to the infrastructure to support catalog processing.
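The per-source decomposition described above lends itself to a simple parallel-map pattern. The toy Python sketch below fans placeholder per-source pipelines out over worker processes on a single machine; the real catalog pipelines run across a Beowulf cluster, and the source list and pipeline body here are invented for illustration only.

# Toy sketch of the parallelization pattern described above: one small
# pipeline instantiated per detected source, fanned out over workers.
from concurrent.futures import ProcessPoolExecutor

def source_pipeline(source_id):
    """Placeholder for per-source property extraction and quality checks."""
    return source_id, {"status": "ok"}

def run_catalog(source_ids, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(source_pipeline, source_ids))

if __name__ == "__main__":
    results = run_catalog([f"src_{i:05d}" for i in range(100)])
    print(len(results), "sources processed")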
Stokes, A F; Banich, M T; Elledge, V C
1991-08-01
The FAA has expressed concern that flight safety could be compromised by undetected cognitive impairment in pilots due to conditions such as substance abuse, mental illness, and neuropsychological problems. Interest has been shown in the possibility of adding a brief "mini-mental exam," or a simple automated test-battery to the standard flight medical to screen for such conditions. The research reported here involved the empirical evaluation of two "mini-mental exams," two paper-and-pencil test batteries, and a prototype version of an automated screening battery. Sensitivity, specificity, and positive predictive value were calculated for each sub-task in a discriminant study of 54 pilots and 62 individuals from a heterogeneous clinical population. Results suggest that the "mini-mental exams" are poor candidates for a screening test. The automated battery showed the best discrimination performance, in part because of the incorporation of dual-task tests of divided attention performance. These tests appear to be particularly sensitive to otherwise difficult-to-detect cognitive impairments of a mild or subtle nature. The use of an automated battery of tests as a screening instrument does appear to be feasible in principle, but the practical success of a screening program is heavily dependent upon the actual prevalence of cognitive impairment in the medical applicant population.
NASA Technical Reports Server (NTRS)
Lange, R. Connor
2012-01-01
Ever since Explorer-1, the United States' first Earth satellite, was developed and launched in 1958, JPL has developed many more spacecraft, including landers and orbiters. While these spacecraft vary greatly in their missions, capabilities, and destinations, they all have something in common: all of their components had to be comprehensively tested. While thorough testing is important to mitigate risk, it is also a very expensive and time-consuming process. Thankfully, since virtually all of the software testing procedures for SMAP are computer controlled, these procedures can be automated. Most people testing SMAP flight software (FSW) would only need to write tests that exercise specific requirements and then check the filtered results to verify everything occurred as planned. This gives developers the ability to automatically launch tests on the testbed, distill the resulting logs into only the important information, generate validation documentation, and then deliver the documentation to management. With many of the steps in FSW testing automated, developers can use their limited time more effectively, validate SMAP FSW modules more quickly, and test them more rigorously. As a result of the various benefits of automating much of the testing process, management is considering the use of these automated tools in future FSW validation efforts.
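The log-distillation step described above can be pictured with the short Python sketch below, which pulls only requirement-tagged pass/fail lines out of a large testbed log and writes a brief validation summary. The log format, tag pattern and file names are invented for illustration and are not the SMAP tooling.

# Sketch of distilling a testbed log into a requirement verification
# summary. The log format and tag names are illustrative assumptions.
import re

REQ_TAG = re.compile(r"\[(REQ-\d+)\]\s+(PASS|FAIL)")

def distill(log_path, report_path):
    results = {}
    with open(log_path) as log:
        for line in log:
            m = REQ_TAG.search(line)
            if m:
                results[m.group(1)] = m.group(2)
    with open(report_path, "w") as report:
        report.write("Requirement verification summary\n")
        for req, status in sorted(results.items()):
            report.write(f"{req}: {status}\n")
    return results

if __name__ == "__main__":
    summary = distill("testbed_run.log", "validation_summary.txt")
    print(f"{sum(s == 'PASS' for s in summary.values())}/{len(summary)} passed")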
Keep the driver in control: Automating automobiles of the future.
Banks, Victoria A; Stanton, Neville A
2016-03-01
Automated automobiles will be on our roads within the next decade but the role of the driver has not yet been formally recognised or designed. Rather, the driver is often left in a passive monitoring role until they are required to reclaim control from the vehicle. This research aimed to test the idea of driver-initiated automation, in which the automation offers decision support that can be either accepted or ignored. The test case examined a combination of lateral and longitudinal control in addition to an auto-overtake system. Despite putting the driver in control of the automated systems by enabling them to accept or ignore behavioural suggestions (e.g. overtake), there were still issues associated with increased workload and decreased trust. These issues are likely to have arisen from the way in which the automated system was designed. Recommendations for improvements in system design have been made which are likely to improve trust and make the role of the driver more transparent concerning their authority over the automated system. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-20
... Commissioner of CBP with authority to conduct limited test programs or procedures designed to evaluate planned.... Specifically, CBP is looking for test participants to include: 2-3 Ocean Carriers. At least one must be filing... their software ready to test with CBP once CBP begins the certification process. CBP will post the...
Golden, Sherita Hill; Hager, Daniel; Gould, Lois J; Mathioudakis, Nestoras; Pronovost, Peter J
2017-01-01
In a complex health system, it is important to establish a systematic and data-driven approach to identifying needs. The Diabetes Clinical Community (DCC) of Johns Hopkins Medicine's Armstrong Institute for Patient Safety and Quality developed a gap analysis tool and process to establish the system's current state of inpatient diabetes care. The collectively developed tool assessed the following areas: program infrastructure; protocols, policies, and order sets; patient and health care professional education; and automated data access. For the purposes of this analysis, gaps were defined as those instances in which local resources, infrastructure, or processes demonstrated a variance against the current national evidence base or institutionally defined best practices. Following the gap analysis, members of the DCC, in collaboration with health system leadership, met to identify priority areas in order to integrate and synergize diabetes care resources and efforts to enhance quality and reduce disparities in care across the system. Key gaps in care identified included lack of standardized glucose management policies, lack of standardized training of health care professionals in inpatient diabetes management, and lack of access to automated data collection and analysis. These results were used to gain resources to support collaborative diabetes health system initiatives and to successfully obtain federal research funding to develop and pilot a pragmatic diabetes educational intervention. At a health system level, the summary format of this gap analysis tool is an effective method to clearly identify disparities in care to focus efforts and resources to improve care delivery. Copyright © 2016 The Joint Commission. Published by Elsevier Inc. All rights reserved.
Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L.; Page, Terry L.; Bhuva, Bharat; Broadie, Kendal
2016-01-01
Background: Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. New Method: The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. Results: The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24 hours) are comparable to traditional manual experiments, while minimizing experimenter involvement. Comparison with Existing Methods: The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ~$500US, making it affordable to a wide range of investigators. Conclusions: This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. PMID:26703418
Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L; Page, Terry L; Bhuva, Bharat; Broadie, Kendal
2016-03-01
Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24h) are comparable to traditional manual experiments, while minimizing experimenter involvement. The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ∼$500US, making it affordable to a wide range of investigators. This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. Copyright © 2015 Elsevier B.V. All rights reserved.
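The operator-input parameters mentioned above can be pictured as a small configuration object that drives a training schedule, as in the minimal Python sketch below. The parameter names, default values and step sequence are illustrative assumptions, not the published protocol or the actual control software.

# Minimal sketch of an operator-configurable training schedule in the
# spirit of the control software described above. Parameter names and
# values are illustrative assumptions, not the published protocol.
from dataclasses import dataclass

@dataclass
class TrialConfig:
    cs_plus: str = "odor_A"        # odorant paired with shock
    cs_minus: str = "odor_B"       # unpaired odorant
    odor_seconds: float = 60.0
    shock_pulses: int = 12
    inter_trial_seconds: float = 45.0
    training_trials: int = 1

def training_schedule(cfg: TrialConfig):
    """Yield (step, duration_s) tuples that a hardware control loop could execute."""
    for _ in range(cfg.training_trials):
        yield (f"present {cfg.cs_plus} + {cfg.shock_pulses} shocks", cfg.odor_seconds)
        yield ("airflow only (rest)", cfg.inter_trial_seconds)
        yield (f"present {cfg.cs_minus}, no shock", cfg.odor_seconds)
        yield ("airflow only (rest)", cfg.inter_trial_seconds)

if __name__ == "__main__":
    for step, duration in training_schedule(TrialConfig()):
        print(f"{duration:6.1f} s  {step}")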
VO for Education: Archive Prototype
NASA Astrophysics Data System (ADS)
Ramella, M.; Iafrate, G.; De Marco, M.; Molinaro, M.; Knapic, C.; Smareglia, R.; Cepparo, F.
2014-05-01
The number of remote control telescopes dedicated to education is increasing in many countries, leading to a correspondingly larger and larger amount of stored educational data that is usually available only to local observers. Here we present the project for a new infrastructure that will allow teachers using educational telescopes to archive their data and easily publish them within the Virtual Observatory (VO), avoiding the complexity of professional tools. Students and teachers anywhere will be able to access these data, with obvious benefits for the realization of grander-scale collaborative projects. Educational VO data will also be an important resource for teachers without direct access to any educational telescope. We will use the educational telescope at our observatory in Trieste as a prototype for the future VO educational data archive resource. The publishing infrastructure will include user authentication, content and curation validation, data validation and ingestion, and VO-compliant resource generation. All of these steps will be performed by means of server-side applications accessible through a web graphical user interface (web GUI). Apart from user registration, which will be validated by a natural person responsible for the archive (after verifying the reliability of the user and inspecting one or more test files), all subsequent steps will be automated. This means that at the very first data submission through the web GUI, a complete resource including an archive and a published VO service will be generated, ready to be registered with the VO. The effort required of the registered user will consist only of providing a self-description at the registration step and submitting the data he or she selects for publishing after each observing session. The infrastructure will be file-format independent and the underlying data model will use a minimal set of standard VO keywords, some of which will be specific to outreach and education, possibly including VO field identification (astronomy, planetary science, solar physics). The VO published resource description will be defined so as to allow selective access to educational data by VO-aware tools, differentiating them from professional data while treating them with the same procedures, protocols and tools. The whole system will be very flexible and scalable, with the objective of leaving as little work as possible to humans.
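As a small illustration of the ingestion and validation step described above, the Python sketch below reads a submitted FITS observation file and collects a minimal set of header keywords to seed the published metadata record. The keyword list and file name are illustrative assumptions, not the project's actual data model.

# Sketch of a minimal ingestion step for an educational archive: read a
# submitted FITS file and collect a small set of header keywords. The
# keyword list is an illustrative assumption, not the project's data model.
from astropy.io import fits

MINIMAL_KEYWORDS = ["OBJECT", "DATE-OBS", "TELESCOP", "INSTRUME", "EXPTIME"]

def extract_metadata(path):
    with fits.open(path) as hdul:
        header = hdul[0].header
        return {key: header.get(key) for key in MINIMAL_KEYWORDS}

if __name__ == "__main__":
    record = extract_metadata("student_observation.fits")  # hypothetical upload
    missing = [k for k, v in record.items() if v is None]
    print("metadata:", record)
    if missing:
        print("submission needs review, missing keywords:", missing)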
Publication of sensor data in the long-term environmental sub-observatory TERENO Northeast
NASA Astrophysics Data System (ADS)
Stender, Vivien; Ulbricht, Damian; Klump, Jens
2017-04-01
Terrestrial Environmental Observatories (TERENO) is an interdisciplinary and long-term research project spanning an Earth observation network across Germany. It includes four test sites within Germany, from the North German lowlands to the Bavarian Alps, and is operated by six research centers of the Helmholtz Association. TERENO Northeast is one of the sub-observatories of TERENO and is operated by the German Research Centre for Geosciences GFZ in Potsdam. This observatory investigates geoecological processes in the northeastern lowland of Germany by collecting large amounts of environmentally relevant data. The success of long-term projects like TERENO depends on well-organized data management, data exchange between the partners involved, and the availability of the captured data. Data discovery and dissemination are facilitated not only through data portals of the regional TERENO observatories but also through a common spatial data infrastructure, TEODOOR (TEreno Online Data repOsitORry). TEODOOR bundles the data provided by the different web services of the individual observatories and provides tools for data discovery, visualization and data access. The TERENO Northeast data infrastructure integrates data from more than 200 instruments and makes data available through standard web services. TEODOOR accesses the OGC Sensor Web Enablement (SWE) interfaces offered by the regional observatories. In addition to the SWE interface, TERENO Northeast also publishes time series of environmental sensor data through the DOI registration service at GFZ Potsdam. This service uses the DataCite infrastructure to make research data citable and is able to store and disseminate metadata formats commonly used in the geosciences [1]. The metadata required by DataCite are created in an automated process by extracting information from the SWE SensorML metadata. The GFZ data management toolkit panMetaDocs is used to manage and archive file-based datasets and to register Digital Object Identifiers (DOIs) for published data. In this presentation we report on current advances in the publication of time series data from environmental sensor networks. [1] http://doidb.wdc-terra.org/oaip/oai?verb=ListRecords&metadataPrefix=iso19139&set=DOIDB.TERENO
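The automated mapping from sensor descriptions to DataCite metadata described above can be sketched as a simple extraction step, as in the Python example below. The XML element names and default values are simplified placeholders, not the actual SensorML schema or the TERENO implementation.

# Sketch of extracting a few fields from a sensor description document
# and assembling minimal DataCite-style properties. The XML paths and
# defaults are simplified placeholders, not the actual SensorML schema.
import xml.etree.ElementTree as ET

def sensor_to_datacite(sensorml_path):
    root = ET.parse(sensorml_path).getroot()

    def text(tag, default=""):
        node = root.find(f".//{tag}")
        return node.text.strip() if node is not None and node.text else default

    return {
        "titles": [{"title": text("description", "Environmental time series")}],
        "creators": [{"name": text("contact", "TERENO Northeast")}],
        "publisher": "GFZ Data Services",          # assumed publisher string
        "publicationYear": text("validTime", "2017")[:4],
        "resourceType": {"resourceTypeGeneral": "Dataset"},
    }

if __name__ == "__main__":
    print(sensor_to_datacite("station_sensor.xml"))  # hypothetical file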
Roles of airships in economic development
NASA Technical Reports Server (NTRS)
Beier, G. J.; Hidalgo, G. C.
1975-01-01
It is proposed that airships of known and tested technology could, in some cases, perform routine transport missions more economically than conventional transport modes. If infrastructure for direct surface transport is already in place or if such infrastructure can be justified by the size of the market and there are no unusual impediments to constructing it, then the airships of tested technology cannot normally compete. If, however, the surface routes would be unusually expensive or circuitous, or if they involve several transhipments, or if the market size is too small to spread infrastructure costs of conventional transport, the airships of tested technology present a workable alternative. A series of special cases are considered. The cases, though unusual, are not unique; there are several similar possible applications which, in total, would provide a reasonably large market for airships.
Software design for automated assembly of truss structures
NASA Technical Reports Server (NTRS)
Herstrom, Catherine L.; Grantham, Carolyn; Allen, Cheryl L.; Doggett, William R.; Will, Ralph W.
1992-01-01
Concern over the limited intravehicular activity time has increased the interest in performing in-space assembly and construction operations with automated robotic systems. A technique being considered at LaRC is a supervised-autonomy approach, which can be monitored by an Earth-based supervisor that intervenes only when the automated system encounters a problem. A test-bed to support evaluation of the hardware and software requirements for supervised-autonomy assembly methods was developed. This report describes the design of the software system necessary to support the assembly process. The software is hierarchical and supports both automated assembly operations and supervisor error-recovery procedures, including the capability to pause and reverse any operation. The software design serves as a model for the development of software for more sophisticated automated systems and as a test-bed for evaluation of new concepts and hardware components.
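One common way to support the pause-and-reverse capability described above is a command stack in which every executed operation records its inverse. The Python sketch below shows that pattern in miniature; the operation names and actions are invented for illustration and do not represent the LaRC software.

# Miniature sketch of reversible assembly operations: each executed step
# is kept on a stack with its inverse so a supervisor can back out of a
# failed operation. Operation names and actions are invented.
class AssemblySequencer:
    def __init__(self):
        self.history = []

    def execute(self, name, do, undo):
        do()
        self.history.append((name, undo))

    def reverse_last(self):
        if self.history:
            name, undo = self.history.pop()
            undo()
            return name
        return None

if __name__ == "__main__":
    seq = AssemblySequencer()
    seq.execute("grip strut 12",
                do=lambda: print("gripper closed"),
                undo=lambda: print("gripper opened"))
    seq.execute("insert strut 12 into node 4",
                do=lambda: print("strut inserted"),
                undo=lambda: print("strut retracted"))
    print("reversed:", seq.reverse_last())   # backs out the insertion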
Automated power distribution system hardware. [for space station power supplies
NASA Technical Reports Server (NTRS)
Anderson, Paul M.; Martin, James A.; Thomason, Cindy
1989-01-01
An automated power distribution system testbed for the space station common modules has been developed. It incorporates automated control and monitoring of a utility-type power system. Automated power system switchgear, control and sensor hardware requirements, hardware design, test results, and potential applications are discussed. The system is designed so that the automated control and monitoring of the power system is compatible with both a 208-V, 20-kHz single-phase AC system and a high-voltage (120 to 150 V) DC system.
2011-10-01
Carbapenem Susceptibility Testing Errors Using Three Automated...
Discordant results were categorized as very major errors (VME), major errors (ME), and minor errors (mE). The Vitek 2 method was the only automated susceptibility method in the study that satisfied the FDA standards required for device approval.
2011 Information Systems Summit 2 Held in Baltimore, Maryland on April 4-6, 2011
2011-04-04
Some criteria that should be considered when deciding what to automate: are the tests easy to automate? What makes a test easy to automate is the ability to script... ANSI-748-B defines 32 criteria needed for a FAR/DFARS-compliant Earned Value Management System. These criteria address 5 areas of Earned Value... are the basis of increasing the probability of success of any program, but there are 11 critical criteria that must be present no matter what.
DOT National Transportation Integrated Search
2015-02-01
Through a coordinated effort among the electrical engineering research team of the Florida State University (FSU) and key Florida Department of Transportation (FDOT) personnel, an NTCIP-based automated testing system for NTCIP-compliant ASC has b...
Development and verification testing of automation and robotics for assembly of space structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1993-01-01
A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level and continued development at an enhanced level is warranted.
ERIC Educational Resources Information Center
Zhang, Mo
2013-01-01
Many testing programs use automated scoring to grade essays. One issue in automated essay scoring that has not been examined adequately is population invariance and its causes. The primary purpose of this study was to investigate the impact of sampling in model calibration on population invariance of automated scores. This study analyzed scores…
Hardware fault insertion and instrumentation system: Mechanization and validation
NASA Technical Reports Server (NTRS)
Benson, J. W.
1987-01-01
Automated test capability for extensive low-level hardware fault insertion testing is developed. The test capability is used to calibrate fault detection coverage and associated latency times as relevant to projecting overall system reliability. Described are modifications made to the NASA Ames Reconfigurable Flight Control System (RDFCS) Facility to fully automate the total test loop involving the Draper Laboratories' Fault Injector Unit. The automated capability provided included the application of sequences of simulated low-level hardware faults, the precise measurement of fault latency times, the identification of fault symptoms, and bulk storage of test case results. A PDP-11/60 served as a test coordinator, and a PDP-11/04 as an instrumentation device. The fault injector was controlled by applications test software in the PDP-11/60, rather than by manual commands from a terminal keyboard. The time base was especially developed for this application to use a variety of signal sources in the system simulator.
NASA Astrophysics Data System (ADS)
Hwang, L.; Kellogg, L. H.
2017-12-01
Curation of software promotes discoverability and accessibility and works hand in hand with scholarly citation to ascribe value to, and provide recognition for software development. To meet this challenge, the Computational Infrastructure for Geodynamics (CIG) maintains a community repository built on custom and open tools to promote discovery, access, identification, credit, and provenance of research software for the geodynamics community. CIG (geodynamics.org) originated from recognition of the tremendous effort required to develop sound software and the need to reduce duplication of effort and to sustain community codes. CIG curates software across 6 domains and has developed and follows software best practices that include establishing test cases, documentation, and a citable publication for each software package. CIG software landing web pages provide access to current and past releases; many are also accessible through the CIG community repository on github. CIG has now developed abc - attribution builder for citation to enable software users to give credit to software developers. abc uses zenodo as an archive and as the mechanism to obtain a unique identifier (DOI) for scientific software. To assemble the metadata, we searched the software's documentation and research publications and then requested the primary developers to verify. In this process, we have learned that each development community approaches software attribution differently. The metadata gathered is based on guidelines established by groups such as FORCE11 and OntoSoft. The rollout of abc is gradual as developers are forward-looking, rarely willing to go back and archive prior releases in zenodo. Going forward all actively developed packages will utilize the zenodo and github integration to automate the archival process when a new release is issued. How to handle legacy software, multi-authored libraries, and assigning roles to software remain open issues.
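The archival step described above can be pictured with the Python sketch below, which creates a draft software deposition through Zenodo's public deposit REST API, attaches a release archive, and sets minimal citation metadata. The endpoint layout follows Zenodo's documented API, but the token, metadata values and file name are placeholders, and this is not CIG's abc tool.

# Sketch of archiving a software release and preparing a DOI through the
# Zenodo deposit REST API. Token, metadata and file name are placeholders.
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"

metadata = {
    "metadata": {
        "title": "ExampleCode v1.2.0",                 # hypothetical package
        "upload_type": "software",
        "description": "Release archived for citation.",
        "creators": [{"name": "Developer, Example"}],
    }
}

# 1) create an empty deposition, 2) attach the release archive, 3) set metadata.
dep = requests.post(ZENODO, params={"access_token": TOKEN}, json={}).json()
bucket = dep["links"]["bucket"]
with open("examplecode-1.2.0.tar.gz", "rb") as fp:     # hypothetical tarball
    requests.put(f"{bucket}/examplecode-1.2.0.tar.gz",
                 params={"access_token": TOKEN}, data=fp)
requests.put(f"{ZENODO}/{dep['id']}", params={"access_token": TOKEN}, json=metadata)
# A final POST to the deposition's publish action would mint the DOI.
print("draft deposition created:", dep.get("id"))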
Next generation information communication infrastructure and case studies for future power systems
NASA Astrophysics Data System (ADS)
Qiu, Bin
As the power industry enters the new century, powerful driving forces, uncertainties and new functions are compelling electric utilities to make dramatic changes in their information communication infrastructure. Expanding network services such as real-time measurement and monitoring are also driving the need for more bandwidth in the communication network. These needs will grow further as new remote real-time protection and control applications become more feasible and pervasive. This dissertation addresses two main issues for the future power system information infrastructure: the communication network infrastructure and the associated power system applications. Optical networks will no doubt become the predominant data transmission media for next-generation power system communication. The rapid development of fiber optic network technology poses new challenges in the areas of topology design, network management and real-time applications. Based on advanced fiber optic technologies, an all-fiber network is investigated and proposed. The study covers the system architecture and data exchange protocol aspects. High-bandwidth, robust optical networks could provide great opportunities for the power system to deliver better service and more efficient operation. Different applications are investigated in the dissertation. One typical application is a SCADA information access system; an Internet-based application for the substation automation system is presented. VLSI (Very Large Scale Integration) technology is also used for the automatic generation of one-line diagrams. A high-transmission-rate, low-latency optical network is especially suitable for real-time power system control. A new local-area-network-based Load Shedding Controller (LSC) for isolated power systems is presented. Using PMUs (Phasor Measurement Units) and the fiber optic network, an accurate, AGE (Area Generation Error)-based wide-area load shedding scheme is also proposed. The objective is to shed load in a limited area with minimum disturbance.
NASA Technical Reports Server (NTRS)
Washburn, David A.; Rumbaugh, Duane M.
1992-01-01
Nonhuman primates provide useful models for studying a variety of medical, biological, and behavioral topics. Four years of joystick-based automated testing of monkeys using the Language Research Center's Computerized Test System (LRC-CTS) are examined to derive hints and principles for comparable testing with other species - including humans. The results of multiple parametric studies are reviewed, and reliability data are presented to reveal the surprises and pitfalls associated with video-task testing of performance.
Yamada, Marie; Yamada, Naotomo; Higashitani, Takanori; Ohta, Shoichiro; Sueoka, Eisaburo
2015-11-01
Laboratory testing prior to blood transfusion outside of regular hours in many hospitals and clinics is frequently conducted by technicians without sufficient experience in such testing work. To obtain consistent test results regardless of the degree of laboratory experience with blood transfusion testing, the number of facilities introducing automated equipment for testing prior to blood transfusion is increasing. Our hospital's blood transfusion department introduced fully automated test equipment in October of 2010 for use when blood transfusions are conducted outside of regular hours. However, excessive dependence on automated testing can lead to an inability to do manual blood typing or cross-match testing when necessitated by breakdowns in the automated test equipment, abnormal specimen reactions, or other such cases. In addition, even outside of normal working hours there are more than a few instances in which transfusion must take place based on urgent communications from clinical staff, with the need for prompt and flexible timing of blood transfusion testing and delivery of blood products. To address this situation, in 2010 we began training after-hours laboratory personnel in blood transfusion testing to provide practice using test tubes manually and to achieve greater understanding of blood transfusion test work (especially in cases of critical blood loss). Results of the training and difficulties in its implementation for such after-hours laboratory personnel at our hospital are presented and discussed in this paper.
Pavement Technology and Airport Infrastructure Expansion Impact
NASA Astrophysics Data System (ADS)
Sabib; Setiawan, M. I.; Kurniasih, N.; Ahmar, A. S.; Hasyim, C.
2018-01-01
This research aims to analyze the potential contribution of construction and infrastructure development activities to Airport Performance. The research is a correlation study whose variables are Airport Performance (the X variable) and construction and infrastructure development activities (the Y variable). The population in this research is 148 airports in Indonesia. The sampling technique is total sampling, meaning that all 148 airports in the population are taken as samples. The correlation coefficient (R) test showed that the construction and infrastructure development activities variable has a relatively strong relationship with the Airport Performance variable, but the adjusted R-squared value shows that an increase in construction and infrastructure development activities is influenced by factors other than Airport Performance.
Automation to improve efficiency of field expedient injury prediction screening.
Teyhen, Deydre S; Shaffer, Scott W; Umlauf, Jon A; Akerman, Raymond J; Canada, John B; Butler, Robert J; Goffar, Stephen L; Walker, Michael J; Kiesel, Kyle B; Plisky, Phillip J
2012-07-01
Musculoskeletal injuries are a primary source of disability in the U.S. Military. Physical training and sports-related activities account for up to 90% of all injuries, and 80% of these injuries are considered overuse in nature. As a result, there is a need to develop an evidence-based musculoskeletal screen that can assist with injury prevention. The purpose of this study was to assess the capability of an automated system to improve the efficiency of field expedient tests that may help predict injury risk and provide corrective strategies for deficits identified. The field expedient tests include survey questions and measures of movement quality, balance, trunk stability, power, mobility, and foot structure and mobility. Data entry for these tests was automated using handheld computers, barcode scanning, and netbook computers. An automated algorithm for injury risk stratification and mitigation techniques was run on a server computer. Without automation support, subjects were assessed in 84.5 ± 9.1 minutes per subject compared with 66.8 ± 6.1 minutes per subject with automation and 47.1 ± 5.2 minutes per subject with automation and process improvement measures (p < 0.001). The average time to manually enter the data was 22.2 ± 7.4 minutes per subject. An additional 11.5 ± 2.5 minutes per subject was required to manually assign an intervention strategy. Automation of this injury prevention screening protocol using handheld devices and netbook computers allowed for real-time data entry and enhanced the efficiency of injury screening, risk stratification, and prescription of a risk mitigation strategy.
Enabling Smart Manufacturing Research and Development using a Product Lifecycle Test Bed.
Helu, Moneer; Hedberg, Thomas
2015-01-01
Smart manufacturing technologies require a cyber-physical infrastructure to collect and analyze data and information across the manufacturing enterprise. This paper describes a concept for a product lifecycle test bed built on a cyber-physical infrastructure that enables smart manufacturing research and development. The test bed consists of a Computer-Aided Technologies (CAx) Lab and a Manufacturing Lab that interface through the product model creating a "digital thread" of information across the product lifecycle. The proposed structure and architecture of the test bed is presented, which highlights the challenges and requirements of implementing a cyber-physical infrastructure for manufacturing. The novel integration of systems across the product lifecycle also helps identify the technologies and standards needed to enable interoperability between design, fabrication, and inspection. Potential research opportunities enabled by the test bed are also discussed, such as providing publicly accessible CAx and manufacturing reference data, virtual factory data, and a representative industrial environment for creating, prototyping, and validating smart manufacturing technologies.
Enabling Smart Manufacturing Research and Development using a Product Lifecycle Test Bed
Helu, Moneer; Hedberg, Thomas
2017-01-01
Smart manufacturing technologies require a cyber-physical infrastructure to collect and analyze data and information across the manufacturing enterprise. This paper describes a concept for a product lifecycle test bed built on a cyber-physical infrastructure that enables smart manufacturing research and development. The test bed consists of a Computer-Aided Technologies (CAx) Lab and a Manufacturing Lab that interface through the product model creating a “digital thread” of information across the product lifecycle. The proposed structure and architecture of the test bed is presented, which highlights the challenges and requirements of implementing a cyber-physical infrastructure for manufacturing. The novel integration of systems across the product lifecycle also helps identify the technologies and standards needed to enable interoperability between design, fabrication, and inspection. Potential research opportunities enabled by the test bed are also discussed, such as providing publicly accessible CAx and manufacturing reference data, virtual factory data, and a representative industrial environment for creating, prototyping, and validating smart manufacturing technologies. PMID:28664167
Williams, James A; Eddleman, Laura; Pantone, Amy; Martinez, Regina; Young, Stephen; Van Der Pol, Barbara
2014-08-01
Next-generation diagnostics for Chlamydia trachomatis and Neisseria gonorrhoeae are available on semi- or fully-automated platforms. These systems require less hands-on time than older platforms and are user friendly. Four automated systems, the ABBOTT m2000 system, Becton Dickinson Viper System with XTR Technology, Gen-Probe Tigris DTS system, and Roche cobas 4800 system, were evaluated for total run time, hands-on time, and walk-away time. All of the systems evaluated in this time-motion study were able to complete a diagnostic test run within an 8-h work shift, instrument setup and operation were straightforward and uncomplicated, and walk-away time ranged from approximately 90 to 270 min in a head-to-head comparison of each system. All of the automated systems provide technical staff with increased time to perform other tasks during the run, offer easy expansion of the diagnostic test menu, and have the ability to increase specimen throughput. © 2013 Society for Laboratory Automation and Screening.
Development of autonomous vehicles’ testing system
NASA Astrophysics Data System (ADS)
Ivanov, A. M.; Shadrin, S. S.
2018-02-01
This article presents an overview of the implementation risks of automated and, in the longer term, autonomous vehicles (AVs). A set of activities, relevant before AVs are used on public roads, that minimizes the negative technical and social problems of AV implementation is presented. A classification of the operating conditions of vehicles' automated control systems is formulated. Groups of tests for AVs are developed and justified, and a sequence for forming an AV testing system is proposed.
Policy Brief: What is the Legal Framework for Automated Vehicles in Texas?
DOT National Transportation Integrated Search
2017-11-01
During the 85th Texas Legislature in 2017, Texas enacted a law related to automated vehicles. The bill, SB 2205, creates the legal framework for automated vehicle operation and testing in Texas. Although this law addresses a number of issues that can...
Polonchuk, Liudmila
2012-01-01
The Patchliner® temperature-controlled automated patch clamp system was evaluated for testing drug effects on potassium currents through human ether-à-go-go related gene (hERG) channels expressed in Chinese hamster ovary cells at 35–37°C. IC50 values for a set of reference drugs were compared with those obtained using the conventional voltage clamp technique. The results showed good correlation between the data obtained using automated and conventional electrophysiology. Based on these results, the Patchliner® represents an innovative automated electrophysiology platform for conducting the hERG assay that substantially increases throughput and has the advantage of operating at physiological temperature. It allows fast, accurate, and direct assessment of channel function to identify potential proarrhythmic side effects and sets a new standard in ion channel research for drug safety testing. PMID:22303293
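The IC50 values compared in the study above are typically obtained by fitting a Hill equation to concentration-response data. The short Python sketch below shows such a fit; the concentration-response points are invented placeholders, not measurements from the Patchliner® evaluation.

# Sketch of estimating an IC50 from concentration-response data with a
# Hill equation. The data points below are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fraction of current remaining at a given blocker concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc_um  = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])      # µM, hypothetical
response = np.array([0.98, 0.95, 0.84, 0.60, 0.33, 0.12, 0.04])  # normalized current

params, _ = curve_fit(hill, conc_um, response, p0=[0.5, 1.0])
print(f"IC50 ≈ {params[0]:.2f} µM, Hill slope ≈ {params[1]:.2f}")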
Zwaenepoel, Karen; Merkle, Dennis; Cabillic, Florian; Berg, Erica; Belaud-Rotureau, Marc-Antoine; Grazioli, Vittorio; Herelle, Olga; Hummel, Michael; Le Calve, Michele; Lenze, Dido; Mende, Stefanie; Pauwels, Patrick; Quilichini, Benoit; Repetti, Elena
2015-02-01
In the past several years we have observed a significant increase in our understanding of molecular mechanisms that drive lung cancer. Specifically in the non-small cell lung cancer sub-types, ALK gene rearrangements represent a sub-group of tumors that are targetable by the tyrosine kinase inhibitor Crizotinib, resulting in significant reductions in tumor burden. Phase II and III clinical trials were performed using an ALK break-apart FISH probe kit, making FISH the gold standard for identifying ALK rearrangements in patients. FISH is often considered a labor and cost intensive molecular technique, and in this study we aimed to demonstrate feasibility for automation of ALK FISH testing, to improve laboratory workflow and ease of testing. This involved automation of the pre-treatment steps of the ALK assay using various protocols on the VP 2000 instrument, and facilitating automated scanning of the fluorescent FISH specimens for simplified enumeration on various backend scanning and analysis systems. The results indicated that ALK FISH can be automated. Significantly, both the Ikoniscope and BioView system of automated FISH scanning and analysis systems provided a robust analysis algorithm to define ALK rearrangements. In addition, the BioView system facilitated consultation of difficult cases via the internet. Copyright © 2015 Elsevier Inc. All rights reserved.