Sample records for configuration monitoring tool

  1. Configuration Management and Infrastructure Monitoring Using CFEngine and Icinga for Real-time Heterogeneous Data Taking Environment

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing environment is an intensive, ever-growing system used for real-time data collection and analysis. Composed of heterogeneous and sometimes custom-tuned groups of machines, the computing infrastructure was previously managed by manual configuration and inconsistently monitored by a combination of tools. This situation led to configuration inconsistency and an overload of repetitive tasks, along with lackluster communication between personnel and machines. Globally securing this heterogeneous cyberinfrastructure was tedious at best, so an agile, policy-driven system ensuring consistency was pursued. Three configuration management tools (Chef, Puppet, and CFEngine) were compared for reliability, versatility, and performance, along with the infrastructure monitoring tools Nagios and Icinga. STAR selected the CFEngine configuration management tool and the Icinga infrastructure monitoring system, leading to a versatile and sustainable solution. By leveraging these two tools, STAR can now swiftly upgrade and modify the environment to its needs and promptly react to cyber-security requests. By creating a sustainable long-term monitoring solution, the time to detect failures was reduced from days to minutes, allowing rapid action before issues become dire problems that could cause loss of precious experimental data or uptime.
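
    The paper itself contains no code; as a rough illustration of how Icinga (like Nagios) consumes the kind of checks described, the sketch below is a minimal plugin that reports disk usage through the standard exit-code convention (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). The thresholds and the monitored path are hypothetical, not values from the STAR deployment.

        #!/usr/bin/env python3
        # Minimal Nagios/Icinga-style check plugin: the exit code conveys the state
        # (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN); stdout carries a one-line
        # status message plus optional performance data after the '|'.
        import shutil
        import sys

        WARN_PCT = 80.0   # hypothetical warning threshold
        CRIT_PCT = 90.0   # hypothetical critical threshold
        PATH = "/"        # filesystem to check

        def main():
            try:
                usage = shutil.disk_usage(PATH)
                used_pct = 100.0 * usage.used / usage.total
            except OSError as err:
                print("DISK UNKNOWN - %s" % err)
                sys.exit(3)
            msg = "%.1f%% of %s used | used_pct=%.1f%%;%.0f;%.0f" % (
                used_pct, PATH, used_pct, WARN_PCT, CRIT_PCT)
            if used_pct >= CRIT_PCT:
                print("DISK CRITICAL - " + msg)
                sys.exit(2)
            if used_pct >= WARN_PCT:
                print("DISK WARNING - " + msg)
                sys.exit(1)
            print("DISK OK - " + msg)
            sys.exit(0)

        if __name__ == "__main__":
            main()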

  2. The evolution of monitoring system: the INFN-CNAF case study

    NASA Astrophysics Data System (ADS)

    Bovina, Stefano; Michelotto, Diego

    2017-10-01

    Over the past two years, the operations at CNAF, the ICT center of the Italian Institute for Nuclear Physics, have undergone significant changes. The adoption of configuration management tools, such as Puppet, and the constant growth of dynamic and cloud infrastructures have led us to investigate a new monitoring approach. The present work deals with the centralization of the monitoring service at CNAF through a scalable and highly configurable monitoring infrastructure. The tools were selected taking into account the following user requirements: (I) adaptability to dynamic infrastructures, (II) ease of configuration and maintenance, with the capability to provide more flexibility, (III) compatibility with the existing monitoring system, and (IV) re-usability and ease of access to information and data. The CNAF monitoring infrastructure and its components are described: Sensu as the monitoring router, InfluxDB as the time-series database storing data gathered from sensors, Uchiwa as the monitoring dashboard, and Grafana as the tool for building dashboards and visualizing time-series metrics.
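
    None of the CNAF configuration is reproduced in the abstract; purely as an illustration of the data path from a check to InfluxDB, the sketch below pushes one gauge reading through the InfluxDB 1.x HTTP write endpoint using the line protocol, the same kind of write a Sensu metric handler performs. The host, database, and measurement names are hypothetical.

        # Push one gauge reading into InfluxDB (1.x HTTP API) with the line
        # protocol: "<measurement>,<tag>=<v> <field>=<v> <unix-seconds>".
        # Host, database, and measurement names below are hypothetical.
        import time
        import urllib.request

        INFLUX_URL = "http://influxdb.example.org:8086/write?db=monitoring&precision=s"

        def write_point(measurement, host, value):
            line = "%s,host=%s value=%f %d" % (measurement, host, value, int(time.time()))
            req = urllib.request.Request(INFLUX_URL, data=line.encode("utf-8"), method="POST")
            with urllib.request.urlopen(req) as resp:
                return resp.status   # 204 means the point was accepted

        write_point("load_one", "wn-001", 0.42)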

  3. Next Generation Monitoring: Tier 2 Experience

    NASA Astrophysics Data System (ADS)

    Fay, R.; Bland, J.; Jones, S.

    2017-10-01

    Monitoring IT infrastructure is essential for maximizing availability and minimizing disruption by detecting failures and developing issues. The HEP group at Liverpool has recently updated its monitoring infrastructure with the goal of increasing coverage, improving visualization capabilities, and streamlining configuration and maintenance. Here we present a summary of Liverpool’s experience, the monitoring infrastructure, and the tools used to build it. In brief, system checks are configured in Puppet using Hiera and managed by Sensu, replacing Nagios. Centralised logging is managed with Elasticsearch, together with Logstash and Filebeat. Kibana provides an interface for interactive analysis, including visualization and dashboards. Metric collection is also configured in Puppet, managed by collectd, and stored in Graphite, with Grafana providing a visualization and dashboard tool. The Uchiwa dashboard for Sensu provides a web interface for viewing infrastructure status. Alert capabilities are provided via external handlers. A custom alert handler is in development to provide an easily configurable, extensible and maintainable alert facility.
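
    As a small, hedged illustration of the metric path mentioned above (collectd to Graphite to Grafana), the sketch below sends one sample to Graphite's plaintext listener on port 2003; the host name and metric path are made up for the example, not taken from the Liverpool setup.

        # Send one metric sample to Graphite's plaintext listener (carbon-cache,
        # port 2003): "metric.path value unix-time\n".
        import socket
        import time

        CARBON_HOST = "graphite.example.org"
        CARBON_PORT = 2003

        def send_metric(path, value, timestamp=None):
            ts = int(timestamp if timestamp is not None else time.time())
            message = "%s %s %d\n" % (path, value, ts)
            with socket.create_connection((CARBON_HOST, CARBON_PORT), timeout=5) as sock:
                sock.sendall(message.encode("ascii"))

        send_metric("hep.wn001.load.shortterm", 0.42)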

  4. Alarms Philosophy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Karen S; Kasemir, Kay

    2009-01-01

    An effective alarm system consists of a mechanism to monitor control points and generate alarm notifications, tools for operators to view, hear, acknowledge and handle alarms, and a good configuration. Despite the availability of numerous fully featured tools, accelerator alarm systems continue to be disappointing to operations, frequently to the point of alarms being permanently silenced or totally ignored. This is often due to configurations that produce an excessive number of alarms or fail to communicate the required operator response. Most accelerator controls systems do a good job of monitoring specified points and generating notifications when parameters exceed predefined limits. In some cases, improved tools can help, but more often, poor configuration is the root cause of ineffective alarm systems. At SNS, we have invested considerable effort in generating appropriate configurations using a rigorous set of rules based on best practices in the industrial process controls community. This paper will discuss our alarm configuration philosophy and operator response to our new system.
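
    The configuration rules themselves are not given in the abstract; the toy sketch below (not the SNS implementation) only illustrates two ideas it emphasizes: an alarm that latches until acknowledged, and guidance text so the notification communicates the required operator response. Names and limits are invented.

        # Toy latched alarm: it fires once when a control point exceeds its limit,
        # carries guidance text describing the required operator response, and
        # stays active until acknowledged.
        class Alarm:
            def __init__(self, name, high_limit, guidance):
                self.name = name
                self.high_limit = high_limit
                self.guidance = guidance      # required operator response
                self.active = False

            def update(self, value):
                if value > self.high_limit and not self.active:
                    self.active = True        # latch until acknowledged
                    print("ALARM %s: %.2e > %.2e -- %s"
                          % (self.name, value, self.high_limit, self.guidance))

            def acknowledge(self):
                self.active = False

        vac = Alarm("sector1_vacuum", 1e-6, "Close gate valve GV-101 and call the vacuum on-call")
        vac.update(2.5e-6)   # exceeds the limit: one notification is produced
        vac.update(3.0e-6)   # still latched: no repeated notification
        vac.acknowledge()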

  5. Tools to manage the enterprise-wide picture archiving and communications system environment.

    PubMed

    Lannum, L M; Gumpf, S; Piraino, D

    2001-06-01

    The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service-level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool to monitor network activity and individual system performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for complete automation of the software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides central configuration control for the establishment and maintenance of system standards. This report will describe PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.

  6. The “NetBoard”: Network Monitoring Tools Integration for INFN Tier-1 Data Center

    NASA Astrophysics Data System (ADS)

    De Girolamo, D.; dell'Agnello, L.; Zani, S.

    2012-12-01

    The monitoring and alert system is fundamental for the management and the operation of the network in a large data center such as an LHC Tier-1. The network of the INFN Tier-1 at CNAF is a multi-vendor environment: for its management and monitoring several tools have been adopted and different sensors have been developed. In this paper, after an overview of the different aspects to be monitored and the tools used for them (i.e. MRTG, Nagios, Arpwatch, NetFlow, Syslog, etc.), we describe the “NetBoard”, a monitoring toolkit developed at the INFN Tier-1. NetBoard, developed for a multi-vendor network, is able to install and auto-configure all tools needed for its monitoring, via a network device discovery mechanism, a configuration file, or a wizard. In this way, we are also able to activate different types of sensors and Nagios checks according to the equipment vendor specifications. Moreover, when a new device is connected to the LAN, NetBoard can detect where it is plugged in. Finally, the NetBoard web interface shows the overall status of the entire network “at a glance”: local and wide-area link utilization (including the LHCOPN and the LHCONE), the health status of network devices (with active alerts), and flow analysis.
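
    NetBoard's own discovery code and templates are not reproduced in the abstract; as a hedged sketch of the auto-configuration idea it describes, the snippet below turns a hypothetical list of discovered devices into Nagios/Icinga host definitions that a monitoring server could load.

        # Turn a (hypothetical) list of discovered devices into Nagios/Icinga
        # host definitions written to a configuration file.
        DISCOVERED = [
            {"name": "core-sw-01", "address": "192.168.10.1", "vendor": "cisco"},
            {"name": "edge-sw-07", "address": "192.168.10.7", "vendor": "extreme"},
        ]

        HOST_TEMPLATE = """define host {{
            use        generic-switch
            host_name  {name}
            address    {address}
            _VENDOR    {vendor}
        }}
        """

        with open("discovered_hosts.cfg", "w") as cfg:
            for dev in DISCOVERED:
                cfg.write(HOST_TEMPLATE.format(**dev))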

  7. IPv6 testing and deployment at Prague Tier 2

    NASA Astrophysics Data System (ADS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek; Fiala, Lukáš

    2012-12-01

    The Computing Center of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, ATLAS, ALICE, Auger); it currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to one class C block of IPv4 addresses, and hence we had to move most of our worker nodes behind NAT. However, this solution demands a more complicated routing setup. We see IPv6 deployment as a solution that requires less routing and more switching, and therefore promises higher network throughput. The administrators of the Computing Center strive to configure and install all provided services automatically. For installation tasks we use PXE and kickstart, for network configuration we use DHCP, and for software configuration we use CFEngine. Many hardware boxes are configured via specific web pages or via the telnet/ssh protocol provided by the box itself. All our services are monitored with several tools, e.g. Nagios, Munin, and Ganglia. We rely heavily on the SNMP protocol for hardware health monitoring. All these installation, configuration and monitoring tools must be tested before we can switch completely to the IPv6 network stack. In this contribution we present the tests we have made, the limitations we have faced, and the configuration decisions we have made during IPv6 testing. We also present the testbed built on virtual machines that was used for all the testing and evaluation.
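
    As a trivial, hedged example of the kind of dual-stack test described (whether a monitored service is reachable over IPv6 at all), the sketch below attempts a TCP connection using only AF_INET6 addresses; the host and port are placeholders.

        # Check whether a monitored service is reachable over IPv6 only.
        import socket

        def reachable_v6(host, port, timeout=5.0):
            try:
                infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
            except socket.gaierror:
                return False                      # no AAAA record / no IPv6 address
            for family, socktype, proto, _canon, sockaddr in infos:
                try:
                    with socket.socket(family, socktype, proto) as s:
                        s.settimeout(timeout)
                        s.connect(sockaddr)
                        return True
                except OSError:
                    continue
            return False

        print(reachable_v6("nagios.example.org", 443))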

  8. Centralized Fabric Management Using Puppet, Git, and GLPI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William

    2012-12-01

    Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool designed for enterprise-class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services; change management requiring authorized approval of production changes; a complete version-controlled history of all changes made; separation of production, testing and development systems using Puppet environments; semi-automated server inventory using GLPI; and configuration change monitoring and reporting using the Puppet dashboard. We also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).

  9. Continuous Security and Configuration Monitoring of HPC Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Lomeli, H. D.; Bertsch, A. D.; Fox, D. M.

    Continuous security and configuration monitoring of information systems has been a time-consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours, rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and the security compliance status of HPC clusters. In conjunction with other configuration management systems, the reporting tool provides continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect arbitrary information across a massively distributed computing center and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command-line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool, the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking can be managed from one central location.
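
    The agent's actual specification is given in the paper, not the abstract; the sketch below is only a minimal stand-in showing the general pattern of collecting a few host settings and emitting them as timestamped key=value pairs, a format Splunk indexes without extra parsing. The chosen settings are illustrative.

        # Gather a few host settings and emit them as timestamped key=value pairs.
        import platform
        import subprocess
        import time

        def collect():
            facts = {"hostname": platform.node(), "kernel": platform.release()}
            try:
                out = subprocess.run(["getenforce"], capture_output=True, text=True)
                facts["selinux"] = out.stdout.strip() or "unknown"
            except FileNotFoundError:
                facts["selinux"] = "not_installed"
            return facts

        stamp = time.strftime("%Y-%m-%dT%H:%M:%S")
        print(" ".join(["time=%s" % stamp] +
                       ["%s=%s" % kv for kv in sorted(collect().items())]))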

  10. Development of an automated on-line pepsin digestion-liquid chromatography-tandem mass spectrometry configuration for the rapid analysis of protein adducts of chemical warfare agents.

    PubMed

    Carol-Visser, Jeroen; van der Schans, Marcel; Fidder, Alex; Hulst, Albert G; van Baar, Ben L M; Irth, Hubertus; Noort, Daan

    2008-07-01

    Rapid monitoring and retrospective verification are key issues in protection against and non-proliferation of chemical warfare agents (CWA). Such monitoring and verification are adequately accomplished by the analysis of persistent protein adducts of these agents. Liquid chromatography-mass spectrometry (LC-MS) is the tool of choice in the analysis of such protein adducts, but the overall experimental procedure is quite elaborate. Therefore, an automated on-line pepsin digestion-LC-MS configuration has been developed for the rapid determination of CWA protein adducts. The utility of this configuration is demonstrated by the analysis of specific adducts of sarin and sulfur mustard to human butyryl cholinesterase and human serum albumin, respectively.

  11. Designs for Risk Evaluation and Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Designs for Risk Evaluation and Management (DREAM) tool was developed as part of the effort to quantify the risk of geologic storage of carbon dioxide (CO2) under the U.S. Department of Energy's National Risk Assessment Partnership (NRAP). DREAM is an optimization tool created to identify optimal monitoring schemes that minimize the time to first detection of CO2 leakage from a subsurface storage formation. DREAM acts as a post-processor on user-provided output from subsurface leakage simulations. While DREAM was developed for CO2 leakage scenarios, it is applicable to any subsurface leakage simulation of the same output format. The DREAM tool comprises three main components: (1) a Java wizard used to configure and execute the simulations, (2) a visualization tool to view the domain space and optimization results, and (3) a plotting tool used to analyze the results. A secondary Java application is provided to aid users in converting common American Standard Code for Information Interchange (ASCII) output data to the standard DREAM hierarchical data format (HDF5). DREAM employs a simulated annealing approach that searches the solution space by iteratively mutating potential monitoring schemes built of various configurations of monitoring locations and leak detection parameters. This approach has proven to be orders of magnitude faster than an exhaustive search of the entire solution space. The user's manual illustrates the program graphical user interface (GUI), describes the tool inputs, and includes an example application.
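
    DREAM itself is written in Java and works on simulation output; purely to make the optimization loop concrete, the toy sketch below runs simulated annealing over hypothetical candidate monitoring locations, scoring each scheme by its time to first detection. The candidate set, budget, and detection times are fabricated stand-ins for the leakage simulation data.

        # Simulated annealing over hypothetical monitoring locations: each scheme
        # is scored by its time to first detection, and random single-location
        # swaps are accepted with the usual Metropolis rule.
        import math
        import random

        CANDIDATES = list(range(50))     # hypothetical candidate monitoring locations
        BUDGET = 5                       # number of sensors we can afford
        detection_time = {loc: random.uniform(1, 100) for loc in CANDIDATES}

        def cost(scheme):
            return min(detection_time[loc] for loc in scheme)   # time to first detection

        def mutate(scheme):
            new = set(scheme)
            new.remove(random.choice(list(new)))
            new.add(random.choice([c for c in CANDIDATES if c not in new]))
            return new

        current = set(random.sample(CANDIDATES, BUDGET))
        temperature = 10.0
        for _ in range(2000):
            candidate = mutate(current)
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                current = candidate      # accept improvements and, early on, some worse moves
            temperature *= 0.999         # cool slowly

        print(sorted(current), cost(current))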

  12. Energy Sector Security through a System for Intelligent, Learning Network Configuration Monitoring and Management (“Essence”)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Craig; Larmouth, Robert

    The project was conceived and executed with the overarching objective of providing cost-effective tools to cooperative utilities that enable them to quickly detect, characterize, and take remedial action against cyber attacks.

  13. Information Assurance Technology Analysis Center Information Assurance Tools Report Intrusion Detection

    DTIC Science & Technology

    1998-01-01

    …such as central processing unit (CPU) usage, disk input/output (I/O), memory usage, user activity, and number of logins attempted. The statistics… EMERALD — commercial anomaly detection and system monitoring, SRI (porras@csl.sri.com, www.csl.sri.com/emerald/index.html). Gabriel — commercial system… sensors, it starts to protect the network with minimal configuration and maximum intelligence. EMERALD (Event Monitoring…

  14. Remote console for virtual telerehabilitation.

    PubMed

    Lewis, Jeffrey A; Boian, Rares F; Burdea, Grigore; Deutsch, Judith E

    2005-01-01

    The Remote Console (ReCon) telerehabilitation system provides a platform for therapists to guide rehabilitation sessions from a remote location. The ReCon system integrates real-time graphics, audio/video communication, private therapist chat, post-test data graphs, extendable patient and exercise performance monitoring, exercise pre-configuration and modification under a single application. These tools give therapists the ability to conduct training, monitoring/assessment, and therapeutic intervention remotely and in real-time.

  15. Data Auditor: Analyzing Data Quality Using Pattern Tableaux

    NASA Astrophysics Data System (ADS)

    Srivastava, Divesh

    Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
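
    The tableau generation mechanism itself is not detailed in the abstract; as a hedged illustration of the underlying idea, the sketch below measures how well a functional dependency (router determines vendor) holds on per-site subsets of a toy table, the kind of per-pattern confidence a tableau row would summarize. The data and the confidence measure are simplified stand-ins.

        # Measure how well the functional dependency router -> vendor holds on
        # per-site subsets of a toy table.
        from collections import defaultdict

        rows = [
            {"site": "nyc", "router": "r1", "vendor": "cisco"},
            {"site": "nyc", "router": "r1", "vendor": "cisco"},
            {"site": "nyc", "router": "r2", "vendor": "juniper"},
            {"site": "chi", "router": "r3", "vendor": "cisco"},
            {"site": "chi", "router": "r3", "vendor": "arista"},   # violates the dependency
        ]

        def fd_confidence(subset, lhs, rhs):
            groups = defaultdict(set)
            for row in subset:
                groups[row[lhs]].add(row[rhs])
            consistent = sum(len(vals) == 1 for vals in groups.values())
            return consistent / len(groups) if groups else 1.0

        for site in ("nyc", "chi"):      # one candidate pattern per site
            subset = [r for r in rows if r["site"] == site]
            print(site, "confidence(router -> vendor) =", fd_confidence(subset, "router", "vendor"))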

  16. Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks

    DOE PAGES

    Vollmer, Todd; Manic, Milos

    2014-05-01

    A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control, and human-induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability of network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.
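
    The four-step algorithm is only named in the abstract; the hedged sketch below illustrates just its final output stage, emitting Honeyd template stanzas for hosts already learned from captured traffic. The host list stands in for the parsed Ettercap XML, and all addresses, OS strings, and ports are invented.

        # Emit Honeyd template stanzas for hosts learned from captured traffic.
        hosts = [
            {"ip": "10.0.0.5", "os": "Microsoft Windows XP Professional SP1", "tcp_open": [80, 445]},
            {"ip": "10.0.0.9", "os": "Linux 2.6", "tcp_open": [22, 502]},
        ]

        def honeyd_config(hosts):
            lines = []
            for idx, h in enumerate(hosts):
                name = "emulated%d" % idx
                lines.append("create %s" % name)
                lines.append('set %s personality "%s"' % (name, h["os"]))
                lines.append("set %s default tcp action reset" % name)
                for port in h["tcp_open"]:
                    lines.append("add %s tcp port %d open" % (name, port))
                lines.append("bind %s %s" % (h["ip"], name))
            return "\n".join(lines) + "\n"

        print(honeyd_config(hosts))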

  17. Monitoring Evolution at CERN

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Fiorini, B.; Murphy, S.; Pigueiras, L.; Santos, M.

    2015-12-01

    Over the past two years, the operation of the CERN Data Centres went through significant changes with the introduction of new mechanisms for hardware procurement, new services for cloud provisioning and configuration management, among other improvements. These changes resulted in an increase of resources being operated in a more dynamic environment. Today, the CERN Data Centres provide over 11000 multi-core processor servers, 130 PB disk servers, 100 PB tape robots, and 150 high-performance tape drives. To cope with these developments, an evolution of the data centre monitoring tools was also required. This modernisation was based on a number of guiding rules: sustain the increase of resources, adapt to the new dynamic nature of the data centres, make monitoring data easier to share, give more flexibility to Service Managers on how they publish and consume monitoring metrics and logs, establish a common repository of monitoring data, optimise the handling of monitoring notifications, and replace the previous toolset with new open-source technologies with large adoption and community support. This contribution describes how these improvements were delivered, presents the architecture and technologies of the new monitoring tools, and reviews the experience of their production deployment.

  18. Human-In-The-Loop Investigation of Interoperability Between Terminal Sequencing and Spacing, Automated Terminal Proximity Alert, and Wake-Separation Recategorization

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Bienert, Nancy; Borade, Abhay; Gabriel, Conrad; Gujral, Vimmy; Jobe, Kim; Martin, Lynne; Omar, Faisal; Prevot, Thomas; Mercer, Joey

    2016-01-01

    A human-in-the-loop simulation study addressed terminal-area controller-workstation interface variations for interoperability between three new capabilities being introduced by the FAA. The capabilities are Terminal Sequencing and Spacing (TSAS), Automated Terminal Proximity Alert (ATPA), and wake-separation recategorization, or 'RECAT.' TSAS provides controllers with Controller-Managed Spacing (CMS) tools, including slot markers, speed advisories, and early/late indications, together with runway assignments and sequence numbers. ATPA provides automatic monitor, warning, and alert cones to inform controllers about spacing between aircraft on approach. ATPA cones are sized according to RECAT, an improved method of specifying wake-separation standards. The objective of the study was to identify potential issues and provide recommendations for integrating TSAS with ATPA and RECAT. Participants controlled arrival traffic under seven different display configurations, then tested an 'exploratory' configuration developed with participant input. All the display conditions were workable and acceptable, but controllers strongly preferred having the CMS tools available on Feeder positions, and both CMS tools and ATPA available on Final positions. Controllers found the integrated systems favorable and liked being able to tailor configurations to individual preferences.

  19. Scalable Integrated Multi-Mission Support System Simulator Release 3.0

    NASA Technical Reports Server (NTRS)

    Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis

    2012-01-01

    The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission's integration time and risk.

  20. Rig Diagnostic Tools

    NASA Technical Reports Server (NTRS)

    Soileau, Kerry M.; Baicy, John W.

    2008-01-01

    Rig Diagnostic Tools is a suite of applications designed to allow an operator to monitor the status and health of complex networked systems using a unique interface between Java applications and UNIX scripts. The suite consists of Java applications, C scripts, VxWorks applications, UNIX utilities, C programs, and configuration files. The UNIX scripts retrieve data from the system and write them to a certain set of files. The Java side monitors these files and presents the data in user-friendly formats for operators to use in making troubleshooting decisions. This design allows for rapid prototyping and expansion of higher-level displays without affecting the basic data-gathering applications. The suite is designed to be extensible, with the ability to add new system components in building-block fashion without affecting existing system applications. This allows for monitoring of complex systems for which unplanned shutdown time comes at a prohibitive cost.

  1. Easily configured real-time CPOE Pick Off Tool supporting focused clinical research and quality improvement.

    PubMed

    Rosenbaum, Benjamin P; Silkin, Nikolay; Miller, Randolph A

    2014-01-01

    Real-time alerting systems typically warn providers about abnormal laboratory results or medication interactions. For more complex tasks, institutions create site-wide 'data warehouses' to support quality audits and longitudinal research. Sophisticated systems like i2b2 or Stanford's STRIDE utilize data warehouses to identify cohorts for research and quality monitoring. However, substantial resources are required to install and maintain such systems. For more modest goals, an organization desiring merely to identify patients with 'isolation' orders, or to determine patients' eligibility for clinical trials, may adopt a simpler, limited approach based on processing the output of one clinical system, and not a data warehouse. We describe a limited, order-entry-based, real-time 'pick off' tool, utilizing public domain software (PHP, MySQL). Through a web interface the tool assists users in constructing complex order-related queries and auto-generates corresponding database queries that can be executed at recurring intervals. We describe successful application of the tool for research and quality monitoring.
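
    The published tool is written in PHP against MySQL and its schema is not given in the abstract; the sketch below only illustrates the general pattern, in Python, of building a parameterized query from user-chosen criteria and re-running it on a schedule. The table, columns, and criteria are hypothetical, and sqlite3 stands in for the real database.

        # Build a parameterized query from criteria chosen in a web form and
        # re-run it on a timer, printing matching orders.
        import sqlite3
        import time

        criteria = {"order_name": "ISOLATION", "status": "ACTIVE"}   # chosen via the UI

        def build_query(criteria):
            # column names come from a controlled UI; values are bound as parameters
            where = " AND ".join("%s = ?" % col for col in criteria)
            sql = "SELECT patient_id, order_name, entered_at FROM orders WHERE " + where
            return sql, list(criteria.values())

        def poll(db_path, interval_s=300, cycles=1):
            sql, params = build_query(criteria)
            for _ in range(cycles):
                with sqlite3.connect(db_path) as db:
                    for row in db.execute(sql, params):
                        print("match:", row)
                time.sleep(interval_s)

        # poll("cpoe_extract.db")   # uncomment once a database with an 'orders' table exists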

  2. Unified Geophysical Cloud Platform (UGCP) for Seismic Monitoring and other Geophysical Applications.

    NASA Astrophysics Data System (ADS)

    Synytsky, R.; Starovoit, Y. O.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.

    2016-12-01

    We present the Unified Geophysical Cloud Platform (UGCP), or UniGeoCloud, an innovative approach to geophysical data processing in the Cloud environment, with the ability to run any type of data processing software in an isolated environment within a single Cloud platform. We have developed a simple and quick installation method for several widely known open-source seismic software packages (SeisComp3, Earthworm, Geotool, MSNoise) that does not require knowledge of system administration, configuration, OS compatibility issues, and other often annoying details that waste time on system configuration work. The installation process is simplified to a "mouse click" on the selected software package in the Cloud marketplace. The main objective of the developed capability was a software tool concept with which users are able to quickly design and install their own highly reliable and highly available virtual IT infrastructure for organizing seismic (and, in future, other geophysical) data processing for either research or monitoring purposes. These tools provide access to data from any seismic station openly available over IP from the different networks affiliated with different institutions and organizations. Users can also set up their own network as desired, by selecting either regionally deployed stations or a worldwide global network from the global map. The processing software, products, and research results can be easily monitored from everywhere using a variety of user devices, from desktop computers to IT gadgets. Current efforts of the development team are directed at achieving Scalability, Reliability and Sustainability (SRS) of the proposed solutions, allowing any user to run their applications with the confidence of no data loss and no failure of the monitoring or research software components. The system is suitable for quick rollout of the NDC-in-Box software package developed for State Signatories and aimed at promoting the processing of data collected by the IMS network.

  3. XTCE (XML Telemetric and Command Exchange) Standard Making It Work at NASA. Can It Work For You?

    NASA Technical Reports Server (NTRS)

    Munoz-Fernandez, Michela; Smith, Danford S.; Rice, James K.; Jones, Ronald A.

    2017-01-01

    The XML Telemetric and Command Exchange (XTCE) standard is intended as a way to describe telemetry and command databases to be exchanged across centers and space agencies. XTCE usage has the potential to lead to consolidation of the Mission Operations Center (MOC) Monitor and Control displays for mission cross-support, reducing equipment and configuration costs as well as decreasing the turnaround time for telemetry and command modifications during all mission phases. The adoption of XTCE will reduce software maintenance costs by reducing the variation between our existing mission dictionaries. The main objective of this poster is to show how powerful XTCE is in terms of interoperability across centers and missions. We will provide results for a use case where two centers can use their local tools to process and display the same mission telemetry in their MOCs independently of one another. In our use case we first quantified the ability of XTCE to capture the telemetry definitions of the mission by use of our suite of support tools (Conversion, Validation, and Compliance measurement). The next step was to show processing and monitoring of the same telemetry in two mission centers. Once the database was converted to XTCE using our tool, the XTCE file became our primary database and was shared among the various tool chains through their XTCE importers, and ultimately configured to ingest the telemetry stream and display or capture the telemetered information in similar ways. Summary results include the ability to take a real mission database and real mission telemetry and display them on various tools from two centers, as well as the use of freely available COTS tools.

  4. Fault Injection and Monitoring Capability for a Fault-Tolerant Distributed Computation System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Yates, Amy M.; Malekpour, Mahyar R.

    2010-01-01

    The Configurable Fault-Injection and Monitoring System (CFIMS) is intended for the experimental characterization of effects caused by a variety of adverse conditions on a distributed computation system running flight control applications. A product of research collaboration between NASA Langley Research Center and Old Dominion University, the CFIMS is the main research tool for generating actual fault response data with which to develop and validate analytical performance models and design methodologies for the mitigation of fault effects in distributed flight control systems. Rather than a fixed design solution, the CFIMS is a flexible system that enables the systematic exploration of the problem space and can be adapted to meet the evolving needs of the research. The CFIMS has the capabilities of system-under-test (SUT) functional stimulus generation, fault injection and state monitoring, all of which are supported by a configuration capability for setting up the system as desired for a particular experiment. This report summarizes the work accomplished so far in the development of the CFIMS concept and documents the first design realization.

  5. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan

    2012-12-31

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location, and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring in a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing the trade-offs of alternate OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithm to design the optimal sensor network required for condition monitoring of the IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g. refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints. The optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on an LMI formulation and the other a standard INLP formulation. Various algorithms to solve these two formulations were developed and validated. For a given OSP problem the computational efficiency largely depends on the "size" of the problem. Initially, a simplified 1-D gasifier model assuming axial and azimuthal symmetry was used to test the various OSP algorithms. Finally, these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as the solution to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in a modular form and packaged as a software tool for OSP design, in which a designer can explore the various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in-house in Matlab/Simulink©. The tool also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink© that can be used for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and RSC fouling profiles are summarized in this final report.

  6. Wireless device monitoring methods, wireless device monitoring systems, and articles of manufacture

    DOEpatents

    McCown, Steven H [Rigby, ID; Derr, Kurt W [Idaho Falls, ID; Rohde, Kenneth W [Idaho Falls, ID

    2012-05-08

    Wireless device monitoring methods, wireless device monitoring systems, and articles of manufacture are described. According to one embodiment, a wireless device monitoring method includes accessing device configuration information of a wireless device present at a secure area, wherein the device configuration information comprises information regarding a configuration of the wireless device, accessing stored information corresponding to the wireless device, wherein the stored information comprises information regarding the configuration of the wireless device, comparing the device configuration information with the stored information, and indicating the wireless device as one of authorized and unauthorized for presence at the secure area using the comparing.

  7. Let your fingers do the walking: The projects most invaluable tool

    NASA Technical Reports Server (NTRS)

    Zirk, Deborah A.

    1993-01-01

    The barrage of information pertaining to the software being developed for a project can be overwhelming. Current status information, as well as the statistics and history of software releases, should be 'at the fingertips' of project management and key technical personnel. This paper discusses the development, configuration, capabilities, and operation of a relational database, the System Engineering Database (SEDB), which was designed to assist management in monitoring the tasks performed by the Network Control Center (NCC) Project. This database has proven to be an invaluable project tool and is utilized daily to support all project personnel.

  8. Application of structural health monitoring technologies to bio-systems: current status and path forward

    NASA Astrophysics Data System (ADS)

    Bhalla, Suresh; Srivastava, Shashank; Suresh, Rupali; Moharana, Sumedha; Kaur, Naveet; Gupta, Ashok

    2015-03-01

    This paper presents a case for the extension of structural health monitoring (SHM) technologies to offer solutions for biomedical problems. SHM research has made remarkable progress during the last two to three decades. These technologies are now being extended for possible applications in the bio-medical field. In particular, smart materials, such as piezoelectric ceramic (PZT) patches and fibre-Bragg grating (FBG) sensors, offer a new set of possibilities to the bio-medical community to augment their conventional set of sensors, tools and equipment. The paper presents some of the recent extensions of SHM, such as condition monitoring of bones, monitoring of dental implants post-surgery, and foot pressure measurement. Latest developments, such as the non-bonded configuration of PZT patches for monitoring bones and possible applications in osteoporosis detection, are also discussed. In essence, there is a whole new gamut of possibilities for SHM technologies making their foray into the bio-medical sector.

  9. Portable spark-gap arc generator

    NASA Technical Reports Server (NTRS)

    Ignaczak, L. R.

    1978-01-01

    A self-contained spark generator that simulates electrical noise caused by the discharge of static charge is a useful tool when checking sensitive components and equipment. In a test setup, the device introduces repeatable noise pulses as the behavior of components is monitored. The generator uses only standard commercial parts and weighs only 4 pounds; a portable dc power supply is used. Two configurations of the generator have been developed: one is a free-running arc source, and one delivers a spark in response to a triggering pulse.

  10. Experiences running NASTRAN on the Microvax 2 computer

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.; Mitchell, Reginald S.

    1987-01-01

    The MicroVAX operates NASTRAN so well that the only detectable difference in its operation compared to an 11/780 VAX is in the execution time. On the modest installation described here, the engineer has all of the tools he needs to do an excellent job of analysis. System configuration decisions, system sizing, preparation of the system disk, definition of user quotas, installation, monitoring of system errors, and operation policies are discussed.

  11. NetMOD v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J

    2015-12-22

    NetMOD is a tool to model the performance of global ground-based explosion monitoring systems. Version 2.0 of the software supports the simulation of seismic, hydroacoustic, and infrasonic detection capability. The tool provides a user interface to execute simulations based upon a hypothetical definition of the monitoring system configuration, geophysical properties of the Earth, and detection analysis criteria. NetMOD will be distributed with a project file defining the basic performance characteristics of the International Monitoring System (IMS), a network of sensors operated by the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Network modeling is needed to be able to assess and explain the potential effect of changes to the IMS, to prioritize station deployment and repair, and to assess the overall CTBTO monitoring capability currently and in the future. Currently the CTBTO uses version 1.0 of NetMOD, provided to them in early 2014. NetMOD will provide a modern tool that covers all the simulations currently available and allows for the development of additional simulation capabilities of the IMS in the future. NetMOD simulates the performance of monitoring networks by estimating the relative amplitudes of the signal and noise measured at each of the stations within the network based upon known geophysical principles. From these signal and noise estimates, a probability of detection may be determined for each of the stations. The detection probabilities at each of the stations may then be combined to produce an estimate of the detection probability for the entire monitoring network.
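
    The abstract stops at the combination step; as a worked, hedged illustration (not NetMOD's actual algorithm), the sketch below combines per-station detection probabilities into the probability that at least three stations detect an event, a common association-style requirement. The station probabilities are invented; NetMOD derives them from modeled signal and noise amplitudes.

        # Combine per-station detection probabilities into the probability that at
        # least n stations detect the event.
        from itertools import combinations
        from math import prod

        p_station = [0.9, 0.8, 0.75, 0.6, 0.3]

        def prob_at_least(p, n):
            total = 0.0
            for k in range(n, len(p) + 1):
                for hits in combinations(range(len(p)), k):
                    total += prod(p[i] if i in hits else 1 - p[i] for i in range(len(p)))
            return total

        print("P(detection at >= 3 stations) = %.3f" % prob_at_least(p_station, 3))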

  12. An object-oriented approach to deploying highly configurable Web interfaces for the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Lange, Bruno; Maidantchik, Carmen; Pommes, Kathy; Pavani, Varlen; Arosa, Breno; Abreu, Igor

    2015-12-01

    The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers in double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to its description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to assure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server side but, following progressive enhancement guidelines, verification may also be performed on the client side by enabling specific markup data attributes which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available online for developers. FENCE, therefore, speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead.
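
    FENCE itself is PHP, and its configuration grammar is not spelled out in the abstract; the sketch below only illustrates the double-bracket reference idea in Python, substituting {{identifier}} placeholders in a JSON snippet from a lookup table that stands in for the Glance layer. All keys and values are invented.

        # Resolve {{identifier}} references in a JSON configuration against a
        # lookup table standing in for the Glance database layer.
        import json
        import re

        glance_lookup = {"equipment.name": "RadMon-07", "equipment.location": "UX15 sector 5"}

        config_text = ('{"title": "Radiation monitor {{equipment.name}}", '
                       '"subtitle": "Installed at {{equipment.location}}"}')

        def resolve(text, lookup):
            return re.sub(r"\{\{([^}]+)\}\}",
                          lambda m: str(lookup.get(m.group(1).strip(), m.group(0))),
                          text)

        print(json.loads(resolve(config_text, glance_lookup)))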

  13. CMS Configuration Editor: GUI based application for user analysis job

    NASA Astrophysics Data System (ADS)

    de Cosa, A.

    2011-12-01

    We present the user interface and the software architecture of the Configuration Editor for the CMS experiment. The analysis workflow is organized in a modular way, integrated within the CMS framework, which organizes user analysis code in a flexible way. The Python scripting language is adopted to define the job configuration that drives the analysis workflow. It can be a challenging task for users, especially newcomers, to develop analysis jobs managing the configuration of the many required modules. For this reason a graphical tool has been conceived to edit and inspect configuration files. A set of common analysis tools defined in the CMS Physics Analysis Toolkit (PAT) can be steered and configured using the Config Editor. A user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT tools according to the specific user requirements. CMS users can adopt this tool, the Config Editor, to create their analysis while visualizing in real time the effects of their actions. They can visualize the structure of their configuration, look at the modules included in the workflow, inspect the dependencies existing among the modules, and check the data flow. They can see the values at which parameters are set and change them according to what is required by their analysis task. The integration of common tools in the GUI required adopting an object-oriented structure in the Python definition of the PAT tools and the definition of a layer of abstraction from which all PAT tools inherit.

  14. Analyzing data flows of WLCG jobs at batch job level

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-05-01

    With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long-term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehavior and various issues. Therefore we aim for an automated, real-time approach to anomaly detection. As a prerequisite, prototypes for standard workflows have to be examined. Based on measurements spanning several months, different features of HEP jobs are evaluated regarding their effectiveness for data mining approaches to identify these common workflows. The paper introduces the actual measurement approach and statistics, as well as the general concept and first results of classifying different HEP job workflows derived from the measurements at GridKa.

  15. OpenROCS: a software tool to control robotic observatories

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Sanz, Josep; Vilardell, Francesc; Ribas, Ignasi; Gil, Pere

    2012-09-01

    We present the Open Robotic Observatory Control System (OpenROCS), an open source software platform developed for the robotic control of telescopes. It acts as a software infrastructure that executes all the necessary processes to implement responses to the system events that appear in the routine and non-routine operations associated to data-flow and housekeeping control. The OpenROCS software design and implementation provides a high flexibility to be adapted to different observatory configurations and event-action specifications. It is based on an abstract model that is independent of the specific hardware or software and is highly configurable. Interfaces to the system components are defined in a simple manner to achieve this goal. We give a detailed description of the version 2.0 of this software, based on a modular architecture developed in PHP and XML configuration files, and using standard communication protocols to interface with applications for hardware monitoring and control, environment monitoring, scheduling of tasks, image processing and data quality control. We provide two examples of how it is used as the core element of the control system in two robotic observatories: the Joan Oró Telescope at the Montsec Astronomical Observatory (Catalonia, Spain) and the SuperWASP Qatar Telescope at the Roque de los Muchachos Observatory (Canary Islands, Spain).

  16. Sensitivity study and parameter optimization of OCD tool for 14nm finFET process

    NASA Astrophysics Data System (ADS)

    Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping

    2016-03-01

    Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes at the 90 nm technology node and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore sensitivity analysis and parameter optimization are critical in OCD applications. This paper presents a method for seeking the most sensitive measurement configuration to enhance metrology precision and reduce the impact of noise to the greatest extent. In this work, the sensitivity of different types of spectra with a series of hardware configurations of incidence angles and azimuth angles was investigated, so that the optimum hardware measurement configuration and spectrum parameters can be identified. FinFET structures at the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.

  17. Benefits Assessment for Single-Airport Tactical Runway Configuration Management Tool (TRCM)

    NASA Technical Reports Server (NTRS)

    Oseguera-Lohr, Rosa; Phojanamonogkolkij, Nipa; Lohr, Gary W.

    2015-01-01

    The System-Oriented Runway Management (SORM) concept was developed as part of the Airspace Systems Program (ASP) Concepts and Technology Development (CTD) Project, and is composed of two basic capabilities: Runway Configuration Management (RCM) and Combined Arrival/Departure Runway Scheduling (CADRS). RCM is the process of designating active runways, monitoring the active runway configuration for suitability given existing factors, and predicting future configuration changes; CADRS is the process of distributing arrivals and departures across active runways based on local airport and National Airspace System (NAS) goals. The central component in the SORM concept is a tool for producing a recommendation for the optimal runway configuration, runway use strategy, and aircraft sequence, considering as many of the relevant factors required in making this type of decision as possible, as well as user preferences, if feasible. Three separate tools were initially envisioned for this research area, corresponding to the time scale in which they would operate: Strategic RCM (SRCM), with a planning horizon on the order of several hours; Tactical RCM (TRCM), with a planning horizon on the order of 90 minutes; and CADRS, with a planning horizon on the order of 15-30 minutes [1]. Algorithm development was initiated in all three of these areas, but the most fully developed to date is the TRCM algorithm. Earlier studies took a high-level approach to benefits, estimating aggregate benefits across most of the major airports in the National Airspace System (NAS) for both RCM and CADRS [2]. Other studies estimated the benefit of RCM and CADRS using various methods of re-sequencing arrivals to reduce delays [3,4], or better balancing of arrival fixes [5,6]. Additional studies looked at different methods for performing the optimization involved in selecting the best Runway Configuration Plan (RCP) to use [7-10]. Most of these previous studies were high-level or generic in nature (not focusing on specific airports), and benefits were aggregated for the entire NAS, with relatively low-fidelity simulation of SORM functions and aircraft trajectories. For SORM research, a more detailed benefits assessment of RCM and CADRS for specific airports or metroplexes is needed.

  18. Aviation Safety Simulation Model

    NASA Technical Reports Server (NTRS)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.

  19. A leading edge heating array and a flat surface heating array: Final design. [for testing the thermal protection system of the space shuttle

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A heating array is described for testing full-scale sections of the leading edge and lower fuselage surfaces of the shuttle. The heating array was designed to provide a tool for development and acceptance testing of leading edge segments and large flat sections of the main body thermal protection system. The array was designed using a variable-length module concept so that test requirements could be met using interchangeable components from one test configuration in another configuration. Heat-generating modules and heat-absorbing modules were employed to achieve the thermal gradient around the leading edge. A support structure was developed to hold the modules so as to form an envelope around a variety of leading edges, to supply coolant to each module and to the support structure, and to hold the modules in the flat-surface heater configuration. An optical pyrometer system mounted within the array was designed to monitor specimen surface temperatures without altering the test article's surface.

  20. NADIR: A Flexible Archiving System Current Development

    NASA Astrophysics Data System (ADS)

    Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.

    2014-05-01

    The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software usually offers simple and robust solutions for data archiving and distribution but is awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and metadata content is the main threat to re-usability. NADIR, building on stable and mature framework features, addresses these challenging issues. Its main characteristics are a configuration database, a multi-threading and multi-language environment (C++, Java, Python), special features to guarantee high scalability, modularity, robustness, and error tracking, and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, with comments on performance and innovative features (multi-cast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible, with default configurations for every project, first of all for LBT and other IA2 projects.
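
    The publisher-subscriber pattern mentioned among NADIR's features can be sketched minimally as below. This is a generic Python illustration of the paradigm, not NADIR's actual C++/Java/Python implementation; the topic and message names are invented.

```python
from collections import defaultdict
from typing import Callable


class Broker:
    """Minimal publish/subscribe broker: subscribers register callbacks per topic."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        for callback in self._subscribers[topic]:
            callback(message)


if __name__ == "__main__":
    broker = Broker()
    # A hypothetical archiving-site monitor subscribing to ingestion events.
    broker.subscribe("ingest.done", lambda msg: print("archived:", msg["file"]))
    broker.publish("ingest.done", {"file": "lbt_2014_001.fits", "site": "IA2"})
```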

  1. Tools for distributed application management

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Cooper, Robert; Wood, Mark; Birman, Kenneth P.

    1990-01-01

    Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system (a collection of tools for constructing distributed application management software) is described. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when preexisting, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.
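
    The sensor/actuator interface that a control program reacts to can be illustrated with a toy reactive loop. The names, the policy rule, and the restart action below are invented for illustration and are not Meta's actual API.

```python
import random
import time

# Hypothetical sensors and actuators for one application component.
SENSORS = {"queue_length": lambda: random.randint(0, 120)}
ACTUATORS = {"restart_worker": lambda: print("actuator: restarting worker")}

# A tiny "control program": policy rules evaluated against sensor readings.
POLICY = [("queue_length", lambda value: value > 100, "restart_worker")]


def control_loop(cycles: int = 3, period_s: float = 0.1) -> None:
    """Poll sensors and fire actuators whenever a policy rule triggers."""
    for _ in range(cycles):
        readings = {name: read() for name, read in SENSORS.items()}
        for sensor, condition, actuator in POLICY:
            if condition(readings[sensor]):
                ACTUATORS[actuator]()
        time.sleep(period_s)


if __name__ == "__main__":
    control_loop()
```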

  3. Knowledge From Pictures (KFP)

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Paterra, Frank; Bailin, Sidney

    1993-01-01

    The old maxim goes: 'A picture is worth a thousand words'. The objective of the research reported in this paper is to demonstrate this idea as it relates to the knowledge acquisition process and the automated development of an expert system's rule base. A prototype tool, the Knowledge From Pictures (KFP) tool, has been developed which configures an expert system's rule base by an automated analysis of, and reasoning about, a 'picture', i.e., a graphical representation of some target system to be supported by the diagnostic capabilities of the expert system under development. This rule base, when refined, could then be used by the expert system for target system monitoring and fault analysis in an operational setting. Most people, when faced with the problem of understanding the behavior of a complicated system, resort to some picture or graphical representation of the system as an aid in thinking about it. This depiction helps the individual visualize the behavior and dynamics of the system under study. An analysis of the picture, augmented with the individual's background information, allows the problem solver to codify knowledge about the system. This knowledge can, in turn, be used to develop computer programs to automatically monitor the system's performance. The approach taken in this research was to mimic this knowledge acquisition paradigm. A prototype tool was developed which provides the user: (1) a mechanism for graphically representing sample system configurations appropriate for the domain, and (2) a linguistic device for annotating the graphical representation with the behaviors and mutual influences of the components depicted in the graphic. The KFP tool, reasoning from the graphical depiction along with user-supplied annotations of component behaviors and inter-component influences, generates a rule base that could be used in automating the fault detection, isolation, and repair of the system.
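
    The idea of deriving monitoring rules from an annotated picture of a system can be sketched as below: a hypothetical component graph with simple symptom and influence annotations is turned into if-then rules. This is only an illustration of the paradigm, not the KFP tool's actual representation or rule language.

```python
# Hypothetical annotated system "picture": components with failure symptoms,
# and influence edges (upstream component -> downstream component).
COMPONENTS = {
    "power_supply": {"symptom": "voltage < 11.5"},
    "transmitter": {"symptom": "signal_level < -90"},
}
INFLUENCES = [("power_supply", "transmitter")]


def generate_rules(components, influences):
    """Produce simple diagnostic if-then rules from the annotated graph."""
    rules = []
    for name, info in components.items():
        rules.append(f"IF {info['symptom']} THEN suspect({name})")
    for upstream, downstream in influences:
        rules.append(
            f"IF suspect({downstream}) AND suspect({upstream}) "
            f"THEN root_cause({upstream})"
        )
    return rules


if __name__ == "__main__":
    for rule in generate_rules(COMPONENTS, INFLUENCES):
        print(rule)
```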

  4. Telescience Support Center Data System Software

    NASA Technical Reports Server (NTRS)

    Rahman, Hasan

    2010-01-01

    The Telescience Support Center (TSC) team has developed a database-driven, increment-specific Data Requirement Document (DRD) generation tool that automates much of the work required for generating and formatting the DRD. It creates a database to load the required changes to configure the TSC data system, thus eliminating a substantial amount of labor in database entry and formatting. The TSC database contains the TSC systems configuration, along with the experimental data, in which human physiological data must be de-commutated in real time. The data for each experiment also must be cataloged and archived for future retrieval. TSC software provides tools and resources for ground operation and data distribution to remote users consisting of PIs (principal investigators), bio-medical engineers, scientists, engineers, payload specialists, and computer scientists. Operations support is provided for computer systems access, detailed networking, and mathematical and computational problems of the International Space Station telemetry data. User training is provided for on-site staff and biomedical researchers and other remote personnel in the usage of the space-bound services via the Internet, which enables significant resource savings for the physical facility along with the time savings versus traveling to NASA sites. The software used in support of the TSC could easily be adapted to other Control Center applications. This would include not only other NASA payload monitoring facilities, but also other types of control activities, such as monitoring and control of the electric grid, chemical, or nuclear plant processes, air traffic control, and the like.

  5. AmWeb: a novel interactive web tool for antimicrobial resistance surveillance, applicable to both community and hospital patients.

    PubMed

    Ironmonger, Dean; Edeghere, Obaghe; Gossain, Savita; Bains, Amardeep; Hawkey, Peter M

    2013-10-01

    Antimicrobial resistance (AMR) is recognized as one of the most significant threats to human health. Local and regional AMR surveillance enables the monitoring of temporal changes in susceptibility to antibiotics and can provide prescribing guidance to healthcare providers to improve patient management and help slow the spread of antibiotic resistance in the community. There is currently a paucity of routine community-level AMR surveillance information. The HPA in England sponsored the development of an AMR surveillance system (AmSurv) to collate local laboratory reports. In the West Midlands region of England, routine reporting of AMR data has been established via the AmSurv system from all diagnostic microbiology laboratories. The HPA Regional Epidemiology Unit developed a web-enabled database application (AmWeb) to provide microbiologists, pharmacists and other stakeholders with timely access to AMR data using user-configurable reporting tools. AmWeb was launched in the West Midlands in January 2012 and is used by microbiologists and pharmacists to monitor resistance profiles, perform local benchmarking and compile data for infection control reports. AmWeb is now being rolled out to all English regions. It is expected that AmWeb will become a valuable tool for monitoring the threat from newly emerging or currently circulating resistant organisms and helping antibiotic prescribers to select the best treatment options for their patients.

  6. Database usage and performance for the Fermilab Run II experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonham, D.; Box, D.; Gallas, E.

    2004-12-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  7. Interaction mining and skill-dependent recommendations for multi-objective team composition

    PubMed Central

    Dorn, Christoph; Skopik, Florian; Schall, Daniel; Dustdar, Schahram

    2011-01-01

    Web-based collaboration and virtual environments supported by various Web 2.0 concepts enable the application of numerous monitoring, mining and analysis tools to study human interactions and team formation processes. The composition of an effective team requires a balance between adequate skill fulfillment and sufficient team connectivity. The underlying interaction structure reflects social behavior and relations of individuals and determines to a large degree how well people can be expected to collaborate. In this paper we address an extended team formation problem that not only requires direct interactions to determine team connectivity but additionally uses implicit recommendations of collaboration partners to support even sparsely connected networks. We provide two heuristics based on Genetic Algorithms and Simulated Annealing for discovering efficient team configurations that yield the best trade-off between skill coverage and team connectivity. Our self-adjusting mechanism aims to discover the best combination of direct interactions and recommendations when deriving connectivity. We evaluate our approach based on multiple configurations of a simulated collaboration network that features close resemblance to real world expert networks. We demonstrate that our algorithm successfully identifies efficient team configurations even when removing up to 40% of experts from various social network configurations. PMID:22298939
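
    A compact illustration of the simulated-annealing heuristic applied to the skill-coverage/connectivity trade-off described above is given below. The toy expert network, the scoring function, and its weights are invented for the example and do not reproduce the authors' algorithm or data.

```python
import math
import random

# Toy expert network: skills per expert and undirected interaction edges.
SKILLS = {"a": {"java"}, "b": {"ml"}, "c": {"java", "ml"}, "d": {"ux"}, "e": {"ux", "ml"}}
EDGES = {("a", "b"), ("b", "c"), ("c", "e"), ("d", "e")}
REQUIRED = {"java", "ml", "ux"}


def score(team):
    """Trade-off between skill coverage and internal connectivity (weights assumed)."""
    covered = set().union(*(SKILLS[m] for m in team)) & REQUIRED
    links = sum(1 for x in team for y in team
                if x < y and ((x, y) in EDGES or (y, x) in EDGES))
    return 2.0 * len(covered) + 1.0 * links


def anneal(size=3, steps=2000, t0=2.0, cooling=0.995):
    """Simulated annealing over fixed-size teams: swap one member per step."""
    experts = list(SKILLS)
    team = set(random.sample(experts, size))
    best, best_score, t = set(team), score(team), t0
    for _ in range(steps):
        candidate = set(team)
        candidate.remove(random.choice(list(candidate)))
        candidate.add(random.choice([e for e in experts if e not in candidate]))
        delta = score(candidate) - score(team)
        if delta >= 0 or random.random() < math.exp(delta / t):
            team = candidate
        if score(team) > best_score:
            best, best_score = set(team), score(team)
        t *= cooling
    return best, best_score


if __name__ == "__main__":
    team, value = anneal()
    print("Best team:", sorted(team), "score:", value)
```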

  8. NEXT GENERATION ANALYSIS SOFTWARE FOR COMPONENT EVALUATION - Results of Rotational Seismometer Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, D. M.; Merchant, B. J.; Abbott, R. E.

    2012-12-01

    The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing everything about a test in a relational database that is required to reproduce the results of that test. TALENT provides a simple, yet powerful, user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle, or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data was collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude and 4) twenty-six discrete frequencies from 0.353 Hz to 64 Hz. The results are compared to manufacturer-supplied data sheets.

  9. The event notification and alarm system for the Open Science Grid operations center

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems or generally use the OSG. For this reason, these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, email, SMS, Twitter, an instant message server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
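
    The fan-out of an error condition to multiple notification channels, as described above, can be sketched as a small dispatcher. The channel handlers and the priority-based routing policy here are illustrative stand-ins, not the OSG Operations implementation.

```python
# Illustrative alarm dispatcher: route an error event to channels by priority.
CHANNELS = {
    "email": lambda event: print(f"[email] {event['service']}: {event['message']}"),
    "sms": lambda event: print(f"[sms] {event['service']}: {event['message']}"),
    "im": lambda event: print(f"[im] {event['service']}: {event['message']}"),
}

# Assumed routing policy: which channels fire at which priority.
ROUTING = {"low": ["im"], "high": ["im", "email"], "critical": ["im", "email", "sms"]}


def notify(event: dict) -> None:
    """Send one event to every channel configured for its priority."""
    for channel in ROUTING.get(event["priority"], []):
        CHANNELS[channel](event)


if __name__ == "__main__":
    notify({"service": "gridftp", "priority": "critical", "message": "service unreachable"})
```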

  10. Use of smart photochromic indicator for dynamic monitoring of the shelf life of chilled chicken based products.

    PubMed

    Brizio, Ana Paula Dutra Resem; Prentice, Carlos

    2014-03-01

    This study evaluated the applicability of a photochromic time temperature indicator (TTI) to monitor the time-temperature history and shelf life of chilled boneless chicken breast. The results showed that the smart indicator exhibited good reproducibility during the discoloring process in all the conditions investigated. The response was not only visibly interpretable but also well adaptable to measurement using appropriate equipment. For an activation configuration of 4 s of ultraviolet light (UV) per label, the TTI's rate of discoloration was similar to the quality loss of the meat samples analyzed. Thus, the photochromic label (4 s UV/label) attached to the samples served as a dynamic shelf-life label, assuring consumers of the final quality point of chilled boneless chicken breast in an easy and precise form, and providing a reliable tool to monitor the supply chain of this product. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Real-time seismic monitoring needs of a building owner - And the solution: A cooperative effort

    USGS Publications Warehouse

    Celebi, M.; Sanli, A.; Sinclair, M.; Gallant, S.; Radulescu, D.

    2004-01-01

    A recently implemented advanced seismic monitoring system for a 24-story building facilitates recording of accelerations and computing displacements and drift ratios in near-real time to measure the earthquake performance of the building. The drift ratio is related to the damage condition of the specific building. This system meets the owner's needs for rapid quantitative input to assessments and decisions on post-earthquake occupancy. The system is now successfully working and, in the absence of strong shaking to date, is producing low-amplitude data in real time for routine analyses and assessment. Studies of such data to date indicate that the configured monitoring system with its building-specific software can be a useful tool in rapid assessment of buildings and other structures following an earthquake. Such systems can be used for health monitoring of a building, for assessing performance-based design and analyses procedures, for long-term assessment of structural characteristics, and for long-term damage detection.
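
    The drift-ratio computation such a system performs in near-real time can be illustrated minimally: relative horizontal displacement between adjacent instrumented levels divided by the height between them. The floor elevations and displacement values below are made up for illustration and are not from the monitored building.

```python
# Hypothetical instrumented levels: (label, elevation in metres,
# peak horizontal displacement in metres computed from recorded accelerations).
FLOORS = [("ground", 0.0, 0.000), ("10th", 36.0, 0.045), ("roof", 86.0, 0.110)]


def drift_ratios(floors):
    """Inter-story drift ratio between consecutive instrumented levels."""
    ratios = []
    for (_, z_low, d_low), (label, z_high, d_high) in zip(floors, floors[1:]):
        ratios.append((label, abs(d_high - d_low) / (z_high - z_low)))
    return ratios


if __name__ == "__main__":
    for label, ratio in drift_ratios(FLOORS):
        print(f"drift ratio up to {label}: {ratio:.4%}")
```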

  12. Monitoring of heparin concentration in serum by Raman spectroscopy within hollow core photonic crystal fiber

    NASA Astrophysics Data System (ADS)

    Khetani, Altaf; Tiwari, Vidhu S.; Harb, Alaa; Anis, Hanan

    2011-08-01

    The feasibility of using hollow core photonic crystal fiber (HC-PCF) in conjunction with Raman spectroscopy has been explored for real-time monitoring of heparin concentration in serum. Heparin is an important blood anti-coagulant whose precise monitoring and control in patients undergoing cardiac surgery and dialysis is of utmost importance. Our method of heparin monitoring offers a novel alternative to existing clinical procedures in terms of accuracy, response time and sample volume. The optical design configuration simply involves a 785-nm laser diode whose light is coupled into HC-PCF filled with heparin-serum mixtures. By non-selectively filling the HC-PCF, a strong modal field overlap is obtained. Consequently, an enhanced Raman signal (>90 times) is obtained from HC-PCFs filled with various heparin-serum mixtures compared to the bulk counterpart (cuvette). The present scheme has the potential to serve as a `generic biosensing tool' for diagnosing a wide range of biological samples.

  13. Image processing developments and applications for water quality monitoring and trophic state determination

    NASA Technical Reports Server (NTRS)

    Blackwell, R. J.

    1982-01-01

    The use of remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity to supervised methods and in a more cost-effective manner. Image data base technology is used to great advantage in characterizing other contributing effects on water quality. These effects include drainage basin configuration, terrain slope, soil, precipitation and land cover characteristics.

  14. Process control monitoring systems, industrial plants, and process control monitoring methods

    DOEpatents

    Skorpik, James R [Kennewick, WA; Gosselin, Stephen R [Richland, WA; Harris, Joe C [Kennewick, WA

    2010-09-07

    A system comprises a valve; a plurality of RFID sensor assemblies coupled to the valve to monitor a plurality of parameters associated with the valve; a control tag configured to wirelessly communicate with the respective tags that are coupled to the valve, the control tag being further configured to communicate with an RF reader; and an RF reader configured to selectively communicate with the control tag, the reader including an RF receiver. Other systems and methods are also provided.

  15. Integrating reliability and maintainability into a concurrent engineering environment

    NASA Astrophysics Data System (ADS)

    Phillips, Clifton B.; Peterson, Robert R.

    1993-02-01

    This paper describes the results of a reliability and maintainability study conducted at the University of California, San Diego and supported by private industry. Private industry thought the study was important and provided the university access to innovative tools under a cooperative agreement. The current capability of reliability and maintainability tools and how they fit into the design process is investigated. The evolution of design methodologies leading up to today's capability is reviewed for ways to enhance the design process while keeping cost under control. A method for measuring the consequences of reliability and maintainability policy for design configurations in an electronic environment is provided. The interaction of selected modern computer tool sets is described for reliability, maintainability, operations, and other elements of the engineering design process. These tools provide a robust system evaluation capability that brings life-cycle performance improvement information to engineers and their managers before systems are deployed, and allows them to monitor and track performance while a system is in operation.

  16. Optical Network Virtualisation Using Multitechnology Monitoring and SDN-Enabled Optical Transceiver

    NASA Astrophysics Data System (ADS)

    Ou, Yanni; Davis, Matthew; Aguado, Alejandro; Meng, Fanchao; Nejabati, Reza; Simeonidou, Dimitra

    2018-05-01

    We introduce real-time multi-technology transport layer monitoring to facilitate the coordinated virtualisation of optical and Ethernet networks supported by optical virtualise-able transceivers (V-BVT). A monitoring and network resource configuration scheme is proposed to include hardware monitoring in both the Ethernet and optical layers. The scheme depicts the data and control interactions among multiple network layers in a software-defined networking (SDN) context, as well as the application that analyses the monitored data obtained from the database. We also present a re-configuration algorithm to adaptively modify the composition of virtual optical networks based on two criteria. The proposed monitoring scheme is experimentally demonstrated with OpenFlow (OF) extensions for a holistic (re-)configuration across both layers in Ethernet switches and V-BVTs.

  17. Configuration Analysis Tool (CAT). System Description and users guide (revision 1)

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Mcgarry, F. E.; Merwarth, P.

    1982-01-01

    A system description of, and user's guide for, the Configuration Analysis Tool (CAT) are presented. As a configuration management tool, CAT enhances the control of large software systems by providing a repository for information describing the current status of a project. CAT provides an editing capability to update the information and a reporting capability to present the information. CAT is an interactive program available in versions for the PDP-11/70 and VAX-11/780 computers.

  18. Effects of tools inserted through snake-like surgical manipulators.

    PubMed

    Murphy, Ryan J; Otake, Yoshito; Wolfe, Kevin C; Taylor, Russell H; Armand, Mehran

    2014-01-01

    Snake-like manipulators with a large, open lumen can offer improved treatment alternatives for minimally- and less-invasive surgeries. In these procedures, surgeons use the manipulator to introduce and control flexible tools in the surgical environment. This paper describes a predictive algorithm for estimating manipulator configuration given tip position for nonconstant curvature, cable-driven manipulators using energy minimization. During experimental bending of the manipulator with and without a tool inserted in its lumen, images were recorded from an overhead camera in conjunction with actuation cable tension and length. To investigate the accuracy, the estimated manipulator configuration from the model and the ground-truth configuration measured from the image were compared. Additional analysis focused on the response differences for the manipulator with and without a tool inserted through the lumen. Results indicate that the energy minimization model predicts manipulator configuration with an error of 0.24 ± 0.22 mm without tools in the lumen and 0.24 ± 0.19 mm with tools in the lumen (no significant difference, p = 0.81). Moreover, tools did not introduce noticeable perturbations in the manipulator trajectory; however, there was an increase in the force required to reach a configuration. These results support the use of the proposed estimation method for calculating the shape of the manipulator with a tool inserted in its lumen when an accuracy range of at least 1 mm is required.
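
    A toy version of the energy-minimization idea, using a planar two-segment linkage and a brute-force search for the minimum-bending configuration whose tip lands near a target, is sketched below. The segment lengths, cost weights, and search grid are illustrative assumptions and do not reproduce the authors' nonconstant-curvature model.

```python
import math

L1, L2 = 30.0, 30.0          # assumed segment lengths (mm)
TARGET = (45.0, 25.0)        # desired tip position (mm)
TIP_WEIGHT = 10.0            # assumed penalty weight on tip position error


def tip(theta1, theta2):
    """Tip position of a planar two-segment linkage with relative bend angles."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y


def cost(theta1, theta2):
    """Bending 'energy' plus weighted tip-position error."""
    x, y = tip(theta1, theta2)
    error = math.hypot(x - TARGET[0], y - TARGET[1])
    return theta1 ** 2 + theta2 ** 2 + TIP_WEIGHT * error


def estimate_configuration(steps=200):
    """Brute-force search over a discretized angle grid for the minimum-cost shape."""
    grid = [-math.pi / 2 + math.pi * i / steps for i in range(steps + 1)]
    return min(((t1, t2) for t1 in grid for t2 in grid), key=lambda a: cost(*a))


if __name__ == "__main__":
    t1, t2 = estimate_configuration()
    print(f"estimated bends: {math.degrees(t1):.1f} deg, {math.degrees(t2):.1f} deg")
    print("tip:", tuple(round(v, 1) for v in tip(t1, t2)))
```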

  19. Multiplexer/Demultiplexer Loading Tool (MDMLT)

    NASA Technical Reports Server (NTRS)

    Brewer, Lenox Allen; Hale, Elizabeth; Martella, Robert; Gyorfi, Ryan

    2012-01-01

    The purpose of the MDMLT is to improve the reliability and speed of loading multiplexers/demultiplexers (MDMs) in the Software Development and Integration Laboratory (SDIL) by automating the configuration management (CM) of the loads in the MDMs, automating the loading procedure, and providing the capability to load multiple or all MDMs concurrently. Loading may be accomplished in parallel or for single MDMs (remotely). The MDMLT is a Web-based tool that is capable of loading the entire International Space Station (ISS) MDM configuration in parallel. It is able to load Flight Equivalent Units (FEUs), enhanced, standard, and prototype MDMs, as well as both EEPROM (Electrically Erasable Programmable Read-Only Memory) and SSMMU (Solid State Mass Memory Unit) mass memory. This software has extensive configuration management to track loading history, and it delivers a major performance improvement: loading the entire ISS MDM configuration of 49 MDMs takes approximately 30 minutes, as opposed to the 36 hours previously required using the flight method of S-Band uplink. The laptop version recently added to the MDMLT suite allows remote lab loading, with the CM information entered into a common database when it is reconnected to the network. This allows the program to reconfigure the test rigs quickly between shifts, allowing the lab to support a variety of onboard configurations during a single day, based on upcoming or current missions. The MDMLT Computer Software Configuration Item (CSCI) supports a Web-based command and control interface to the user. An interface to the SDIL File Transfer Protocol (FTP) server is supported to import Integrated Flight Loads (IFLs) and Internal Product Release Notes (IPRNs) into the database. An interface to the Monitor and Control System (MCS) is supported to control the power state, and to enable or disable the debug port of the MDMs to be loaded. Two direct interfaces to the MDM are supported: a serial interface (debug port) to receive MDM memory dump data and the calculated checksum, and the Small Computer System Interface (SCSI) to transfer load files to MDMs with hard disks. File transfer from the MDM Loading Tool to EEPROM within the MDM is performed via the MIL-STD-1553 bus, making use of the Real-Time Input/Output Processors (RTIOP) when using the rig-based MDMLT, and via a bus box when using the laptop MDMLT. The bus box is a cost-effective alternative to PC-1553 cards for the laptop. It is noted that this system can be modified and adapted to any avionics laboratory for spacecraft computer loading, ship avionics, or aircraft avionics where multiple configurations and strong configuration management of software/firmware loads are required.

  20. Distributed observing facility for remote access to multiple telescopes

    NASA Astrophysics Data System (ADS)

    Callegari, Massimo; Panciatici, Antonio; Pasian, Fabio; Pucillo, Mauro; Santin, Paolo; Aro, Simo; Linde, Peter; Duran, Maria A.; Rodriguez, Jose A.; Genova, Francoise; Ochsenbein, Francois; Ponz, J. D.; Talavera, Antonio

    2000-06-01

    The REMOT (Remote Experiment Monitoring and conTrol) project was financed in 1996 by the European Community in order to investigate the possibility of generalizing remote access to scientific instruments. After the feasibility of this idea was demonstrated, the DYNACORE (DYNAmically COnfigurable Remote Experiment monitoring and control) project was initiated as a REMOT follow-up. Its purpose is to develop software technology to support scientists in two different domains, astronomy and plasma physics. The resulting system allows (1) simultaneous multiple user access to different experimental facilities, (2) dynamic adaptability to different kinds of real instruments, (3) exploitation of the communication infrastructure's features, (4) ease of use through intuitive graphical interfaces, and (5) additional inter-user communication using off-the-shelf products such as video-conference tools, chat programs and shared blackboards.

  1. Developing a Framework for Effective Network Capacity Planning

    NASA Technical Reports Server (NTRS)

    Yaprak, Ece

    2005-01-01

    As Internet traffic continues to grow exponentially, developing a clearer understanding of, and appropriately measuring, a network's performance is becoming ever more critical. An important challenge faced by the Information Resources Directorate (IRD) at the Johnson Space Center in this context remains not only monitoring and maintaining a secure network, but also better understanding the capacity and future growth potential of its network. This requires capacity planning, which involves modeling and simulating different network alternatives, and incorporating changes in design as technologies, components, configurations, and applications change, to determine optimal solutions in light of IRD's goals, objectives and strategies. My primary task this summer was to address this need. I evaluated network-modeling tools from OPNET Technologies Inc. and Compuware Corporation. I generated a baseline model for Building 45 using both tools by importing "real" topology/traffic information using IRD's various network management tools. I compared each tool against the other in terms of the advantages and disadvantages of both tools for accomplishing IRD's goals. I also prepared a step-by-step "how to design a baseline model" tutorial for both OPNET and Compuware products.

  2. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  3. Building energy analysis tool

    DOEpatents

    Brackney, Larry; Parker, Andrew; Long, Nicholas; Metzger, Ian; Dean, Jesse; Lisell, Lars

    2016-04-12

    A building energy analysis system includes a building component library configured to store a plurality of building components, a modeling tool configured to access the building component library and create a building model of a building under analysis using building spatial data and using selected building components of the plurality of building components stored in the building component library, a building analysis engine configured to operate the building model and generate a baseline energy model of the building under analysis and further configured to apply one or more energy conservation measures to the baseline energy model in order to generate one or more corresponding optimized energy models, and a recommendation tool configured to assess the one or more optimized energy models against the baseline energy model and generate recommendations for substitute building components or modifications.

  4. "Development Radar": The Co-Configuration of a Tool in a Learning Network

    ERIC Educational Resources Information Center

    Toiviainen, Hanna; Kerosuo, Hannele; Syrjala, Tuula

    2009-01-01

    Purpose: The paper aims to argue that new tools are needed for operating, developing and learning in work-life networks where academic and practice knowledge are intertwined in multiple levels of and in boundary-crossing across activities. At best, tools for learning are designed in a process of co-configuration, as the analysis of one tool,…

  5. Parametric electrical impedance tomography for measuring bone mineral density in the pelvis using a computational model.

    PubMed

    Kimel-Naor, Shani; Abboud, Shimon; Arad, Marina

    2016-08-01

    Osteoporosis is defined as bone microstructure deterioration resulting in a decrease in bone strength. Measured bone mineral density (BMD) constitutes the main tool for osteoporosis diagnosis and management, and defines a patient's fracture risk. In the present study, a parametric electrical impedance tomography (pEIT) method was examined for monitoring BMD, using a computerized simulation model and preliminary real measurements. A numerical solver was developed to simulate surface potentials measured over a 3D computerized pelvis model. Varying cortical and cancellous BMD were simulated by changing bone conductivity and permittivity. Up to 35% and 16% change was found in the real and imaginary modules of the calculated potential, respectively, while BMD changes from 100% (normal) to 60% (osteoporosis). Negligible BMD relative error was obtained with SNR > 60 dB. Position-change errors indicate that for long-term monitoring, measurements should be taken at the same geometrical configuration with great accuracy. The numerical simulations were compared to actual measurements acquired from a healthy male subject using a five-electrode belt bioimpedance device. The results suggest that pEIT may provide an inexpensive, easy-to-use tool for frequent monitoring of BMD in small clinics during pharmacological treatment, as a complementary method to the DEXA test. Copyright © 2016. Published by Elsevier Ltd.

  6. System and method for incremental forming

    DOEpatents

    Beltran, Michael; Cao, Jian; Roth, John T.

    2015-12-29

    A system includes a frame configured to hold a workpiece and first and second tool positioning assemblies configured to be opposed to each other on opposite sides of the workpiece. The first and second tool positioning assemblies each include a toolholder configured to secure a tool to the tool positioning assembly, a first axis assembly, a second axis assembly, and a third axis assembly. The first, second, and third axis assemblies are each configured to articulate the toolholder along a respective axis. Each axis assembly includes first and second guides extending generally parallel to the corresponding axis and disposed on opposing sides of the toolholder with respect to the corresponding axis. Each axis assembly includes first and second carriages articulable along the first and second guides of the axis assembly, respectively, in the direction of the corresponding axis.

  7. Subsonic Wing Optimization for Handling Qualities Using ACSYNT

    NASA Technical Reports Server (NTRS)

    Soban, Danielle Suzanne

    1996-01-01

    The capability to accurately and rapidly predict aircraft stability derivatives using one comprehensive analysis tool has been created. The PREDAVOR tool has the following capabilities: rapid estimation of stability derivatives using a vortex lattice method, calculation of a longitudinal handling qualities metric, and inherent methodology to optimize a given aircraft configuration for longitudinal handling qualities, including an intuitive graphical interface. The PREDAVOR tool may be applied to both subsonic and supersonic designs, as well as conventional and unconventional, symmetric and asymmetric configurations. The workstation-based tool uses as its model a three-dimensional model of the configuration generated using a computer aided design (CAD) package. The PREDAVOR tool was applied to a Lear Jet Model 23 and the North American XB-70 Valkyrie.

  8. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure the software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
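
    The data-driven approach, in which sequences and parameter reconfigurations are expressed as data that the software interprets rather than as compiled code, can be sketched generically as below. The sequence content, activity names, and field names are hypothetical and are not Orion GN&C data.

```python
import json

# Hypothetical automated sequence expressed purely as data (e.g. exported by a
# desktop configuration tool); changing it requires no recompilation of the software.
SEQUENCE_JSON = """
[
  {"activity": "coast",        "duration_s": 10, "params": {"attitude_mode": "inertial"}},
  {"activity": "burn_prep",    "duration_s": 5,  "params": {"engine": "aux", "gimbal_check": true}},
  {"activity": "delta_v_burn", "duration_s": 20, "params": {"target_dv_mps": 3.2}}
]
"""


def run_sequence(sequence):
    """Interpret the data-driven sequence: apply each activity's parameters in order."""
    for step in sequence:
        print(f"-> {step['activity']} for {step['duration_s']} s with {step['params']}")


if __name__ == "__main__":
    run_sequence(json.loads(SEQUENCE_JSON))
```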

  9. GI-conf: A configuration tool for the GI-cat distributed catalog

    NASA Astrophysics Data System (ADS)

    Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.

    2009-04-01

    In this work we present a configuration tool for GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities such as caching, brokering and mediation functionalities. GI-cat applies a distributed approach, being able to distribute queries to the remote service providers of interest in an asynchronous style, and notifies the status of the queries to the caller by implementing an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. Two other interfaces are under testing: the CIM and the EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. These include international standards like the OGC Web Services (i.e. OGC CSW, WCS, WFS and WMS), as well as interoperability arrangements (i.e. community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf implements a user-friendly configuration tool for GI-cat. This is a GUI application that employs a visual and very simple approach to configure both the GI-cat publishing and distribution capabilities, in a dynamic way. The tool allows one or more GI-cat configurations to be set. Each configuration consists of: a) the catalog standard interfaces published by GI-cat; b) the resources (i.e. services/servers) to be accessed and mediated, i.e. federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach. The main GI-conf functionalities are: • Interfaces and federated resources management: the user can set which interfaces must be published; besides, she/he can add a new resource, or update or remove an already federated resource. • Multiple configuration management: multiple GI-cat configurations can be defined; every configuration identifies a set of published interfaces and a set of federated resources. Configurations can be edited, added, removed, exported, and even imported. • HTML report creation: an HTML report can be created, showing the current active GI-cat configuration, including the resources that are being federated and the published interface endpoints. The configuration tool is shipped with GI-cat and can be used to configure the service after its installation is completed.

  10. A scalable architecture for online anomaly detection of WLCG batch jobs

    NASA Astrophysics Data System (ADS)

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor network usage and learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs that prevent smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate a tool, BPNetMon, for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information by itself is not sufficient to detect anomalies, for several reasons: for example, the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale in terms of either network communication or computational cost. We therefore propose a scalable architecture based on concepts of a super-peer network.
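
    One simple way to flag anomalous per-node traffic online, in the spirit of the local monitoring described above, is a rolling z-score over a sliding window. The window size, warm-up length, and threshold below are arbitrary choices for illustration and are not the BPNetMon design.

```python
import statistics
from collections import deque


class RollingAnomalyDetector:
    """Flag samples more than `threshold` standard deviations from the rolling mean."""

    def __init__(self, window: int = 50, threshold: float = 3.0) -> None:
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:  # warm-up before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous


if __name__ == "__main__":
    detector = RollingAnomalyDetector()
    traffic = [100.0 + i % 5 for i in range(40)] + [900.0]  # sudden spike at the end
    flags = [detector.update(v) for v in traffic]
    print("anomaly detected at sample index:", flags.index(True))
```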

  11. Monitoring the CMS strip tracker readout system

    NASA Astrophysics Data System (ADS)

    Mersi, S.; Bainbridge, R.; Baulieu, G.; Bel, S.; Cole, J.; Cripps, N.; Delaere, C.; Drouhin, F.; Fulcher, J.; Giassi, A.; Gross, L.; Hahn, K.; Mirabito, L.; Nikolic, M.; Tkaczyk, S.; Wingham, M.

    2008-07-01

    The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m2 and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction. The data acquisition monitoring of the Strip Tracker uses both the data acquisition and the reconstruction software frameworks in order to provide real-time feedback to shifters on the operational state of the detector, archiving for later analysis and possibly trigger automatic recovery actions in case of errors. Here we review the proposed architecture of the monitoring system and we describe its software components, which are already in place, the various monitoring streams available, and our experiences of operating and monitoring a large-scale system.

  12. GIS Application System Design Applied to Information Monitoring

    NASA Astrophysics Data System (ADS)

    Qun, Zhou; Yujin, Yuan; Yuena, Kang

    A natural environment information management system involves on-line instrument monitoring, data communications, database establishment, information management software development, and so on. Its core lies in collecting effective and reliable environmental information, increasing the utilization and sharing of that information through advanced information technology, and thereby providing a timely and scientific foundation for environmental monitoring and management. This thesis adopts C# plug-in application development and uses a complete set of embedded GIS component and tool libraries provided by the GIS Engine to build the core of a plug-in GIS application framework, namely the design and implementation of the framework host program and each functional plug-in, as well as the design and implementation of the plug-in GIS application framework platform. The thesis takes advantage of dynamic plug-in loading configuration to quickly establish a GIS application through visual, component-based collaborative modeling and to realize GIS application integration. The developed platform is applicable to any application integration related to GIS (ESRI platform) and can serve as a base platform for GIS application development.
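
    The dynamic plug-in loading idea, in which the host program discovers and wires in functional plug-ins from configuration, can be sketched as below. The original framework is C#-based; this Python illustration, the module names, and the register(host) convention are assumptions made for the example.

```python
import importlib

# Hypothetical plug-in configuration: module names the host loads at start-up.
PLUGIN_CONFIG = ["plugins.map_view", "plugins.water_quality_layer"]


class Host:
    """Minimal plug-in host: each plug-in module exposes a register(host) function."""

    def __init__(self) -> None:
        self.commands = {}

    def add_command(self, name: str, func) -> None:
        self.commands[name] = func

    def load_plugins(self, module_names) -> None:
        for name in module_names:
            try:
                module = importlib.import_module(name)
                module.register(self)  # assumed plug-in entry point
            except ModuleNotFoundError:
                print(f"plug-in {name} not installed; skipping")


if __name__ == "__main__":
    host = Host()
    host.load_plugins(PLUGIN_CONFIG)
    print("loaded commands:", sorted(host.commands))
```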

  13. Stereoscopic Configurations To Minimize Distortions

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.

    1991-01-01

    Proposed television system provides two stereoscopic displays. Two-camera, two-monitor system used in various camera configurations and with stereoscopic images on monitors magnified to various degrees. Designed to satisfy observer's need to perceive spatial relationships accurately throughout workspace or to perceive them at high resolution in small region of workspace. Potential applications include industrial, medical, and entertainment imaging and monitoring and control of telemanipulators, telerobots, and remotely piloted vehicles.

  14. Distinct contribution of the parietal and temporal cortex to hand configuration and contextual judgements about tools.

    PubMed

    Andres, Michael; Pelgrims, Barbara; Olivier, Etienne

    2013-09-01

    Neuropsychological studies showed that manipulatory and semantic knowledge can be independently impaired in patients with upper-limb apraxia, leading to different tool use disorders. The present study aimed to dissociate the brain regions involved in judging the hand configuration or the context associated to tool use. We focussed on the left supramarginal gyrus (SMG) and left middle temporal gyrus (MTG), whose activation, as evidenced by functional magnetic resonance imaging (fMRI) studies, suggests that they may play a critical role in tool use. The distinctive location of SMG in the dorsal visual stream led us to postulate that this parietal region could play a role in processing incoming information about tools to shape hand posture. In contrast, we hypothesized that MTG, because of its interconnections with several cortical areas involved in semantic memory, could contribute to retrieving semantic information necessary to create a contextual representation of tool use. To test these hypotheses, we used neuronavigated transcranial magnetic stimulation (TMS) to interfere transiently with the function of either left SMG or left MTG in healthy participants performing judgement tasks about either hand configuration or context of tool use. We found that SMG virtual lesions impaired hand configuration but not contextual judgements, whereas MTG lesions selectively interfered with judgements about the context of tool use while leaving hand configuration judgements unaffected. This double dissociation demonstrates that the ability to infer a context of use or a hand posture from tool perception relies on distinct processes, performed in the temporal and parietal regions. The present findings suggest that tool use disorders caused by SMG lesions will be characterized by difficulties in selecting the appropriate hand posture for tool use, whereas MTG lesions will yield difficulties in using tools in the appropriate context. Copyright © 2012. Published by Elsevier Ltd.

  15. Requirements management for Gemini Observatory: a small organization with big development projects

    NASA Astrophysics Data System (ADS)

    Close, Madeline; Serio, Andrew; Cordova, Martin; Hardie, Kayla

    2016-08-01

    Gemini Observatory is an astronomical observatory operating two premier 8m-class telescopes, one in each hemisphere. As an operational facility, a majority of Gemini's resources are spent on operations; however, the observatory undertakes major development projects as well. Current projects include new facility science instruments, an operational paradigm shift to full remote operations, and new operations tools for planning, configuration and change control. Three years ago, Gemini determined that a specialized requirements management tool was needed. Over the next year, the Gemini Systems Engineering Group investigated several tools, selected one for a trial period and configured it for use. Configuration activities included definition of systems engineering processes, development of a requirements framework, and assignment of project roles to tool roles. Test projects were implemented in the tool. At the conclusion of the trial, the group determined that Gemini could meet its requirements management needs without use of a specialized requirements management tool, and the group identified a number of lessons learned, which are described in the last major section of this paper. These lessons learned include how to conduct an organizational needs analysis prior to pursuing a tool; caveats concerning tool criteria and the selection process; the prerequisites and sequence of activities necessary to achieve an optimum configuration of the tool; the need for adequate staff resources and staff training; and a special note regarding organizations in transition and archiving of requirements.

  16. Monitoring devices and systems for monitoring frequency hopping wireless communications, and related methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derr, Kurt W.; Richardson, John G.

    Monitoring devices and systems comprise a plurality of data channel modules coupled to processing circuitry. Each data channel module of the plurality of data channel modules is configured to capture wireless communications for a selected frequency channel. The processing circuitry is configured to receive captured wireless communications from the plurality of data channel modules and to organize received wireless communications according to at least one parameter. Related methods of monitoring wireless communications are also disclosed.

  17. Managing research and surveillance projects in real-time with a novel open-source eManagement tool designed for under-resourced countries.

    PubMed

    Steiner, Andreas; Hella, Jerry; Grüninger, Servan; Mhalu, Grace; Mhimbira, Francis; Cercamondi, Colin I; Doulla, Basra; Maire, Nicolas; Fenner, Lukas

    2016-09-01

    A software tool is developed to facilitate data entry and to monitor research projects in under-resourced countries in real-time. The eManagement tool "odk_planner" is written in the scripting languages PHP and Python. The odk_planner is lightweight and uses minimal internet resources. It was designed to be used with the open source software Open Data Kit (ODK). The users can easily configure odk_planner to meet their needs, and the online interface displays data collected from ODK forms in a graphically informative way. The odk_planner also allows users to upload pictures and laboratory results and sends text messages automatically. User-defined access rights protect data and privacy. We present examples from four field applications in Tanzania successfully using the eManagement tool: 1) clinical trial; 2) longitudinal Tuberculosis (TB) Cohort Study with a complex visit schedule, where it was used to graphically display missing case report forms, upload digitalized X-rays, and send text message reminders to patients; 3) intervention study to improve TB case detection, carried out at pharmacies: a tablet-based electronic referral system monitored referred patients, and sent automated messages to remind pharmacy clients to visit a TB Clinic; and 4) TB retreatment case monitoring designed to improve drug resistance surveillance: clinicians at four public TB clinics and lab technicians at the TB reference laboratory used a smartphone-based application that tracked sputum samples, and collected clinical and laboratory data. The user friendly, open source odk_planner is a simple, but multi-functional, Web-based eManagement tool with add-ons that helps researchers conduct studies in under-resourced countries. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Wireless sensor systems and methods, and methods of monitoring structures

    DOEpatents

    Kunerth, Dennis C.; Svoboda, John M.; Johnson, James T.; Harding, L. Dean; Klingler, Kerry M.

    2007-02-20

    A wireless sensor system includes a passive sensor apparatus configured to be embedded within a concrete structure to monitor infiltration of contaminants into the structure. The sensor apparatus includes charging circuitry and a plurality of sensors respectively configured to measure environmental parameters of the structure which include information related to the infiltration of contaminants into the structure. A reader apparatus is communicatively coupled to the sensor apparatus, the reader apparatus being configured to provide power to the charging circuitry during measurements of the environmental parameters by the sensors. The reader apparatus is configured to independently interrogate individual ones of the sensors to obtain information measured by the individual sensors. The reader apparatus is configured to generate an induction field to energize the sensor apparatus. Information measured by the sensor apparatus is transmitted to the reader apparatus via a response signal that is superimposed on a return induction field generated by the sensor apparatus. Methods of monitoring structural integrity of the structure are also provided.

  19. Spearfishing data reveals the littoral fish communities' association to coastal configuration

    NASA Astrophysics Data System (ADS)

    Boada, Jordi; Sagué, Oscar; Gordoa, Ana

    2017-12-01

    Increasing the knowledge about littoral fish communities is important for ecological science, fisheries, and the sustainability of human communities. The scarcity of baseline data at large spatial scales in a fast-changing world makes it necessary to implement special programs to monitor natural ecosystems. In the present study, we evaluate littoral fish communities using data from spearfishing contests. The Catalan Federation of Underwater Activities (FECDAS) regularly organizes fishing contests across ca. 600 km of coast. Catch records made over the last sixteen years were used to study the fish communities along the coastline. We found two different communities that are closely related to the habitat configuration at a regional level. Interestingly, contests held on the northern coast were mainly grouped together and were characterized by species that inhabit complex rocky habitats, whereas contests held on the southern coast were grouped together and were mainly determined by soft-bottom species (i.e. mugilids and Sarpa salpa). The white sea bream was also much more abundant in the southern group than in the northern group. No significant changes in community composition were found over the studied period, and we successfully set descriptive baselines. Finally, based on these results we propose that data from fishing contest records can complement the available tools for monitoring fish communities.

  20. Ultra-sensitive chemical and biological analysis via specialty fibers with built-in microstructured optofluidic channels.

    PubMed

    Zhang, Nan; Li, Kaiwei; Cui, Ying; Wu, Zhifang; Shum, Perry Ping; Auguste, Jean-Louis; Dinh, Xuan Quyen; Humbert, Georges; Wei, Lei

    2018-02-13

    All-in-fiber optofluidics is an analytical tool that provides enhanced sensing performance with a simplified analyzing system design. Currently, its advance is limited either by complicated liquid manipulation and light injection configurations or by low sensitivity resulting from inadequate light-matter interaction. In this work, we design and fabricate a side-channel photonic crystal fiber (SC-PCF) and exploit its versatile sensing capabilities in in-line optofluidic configurations. The built-in microfluidic channel of the SC-PCF enables strong light-matter interaction and easy lateral access of liquid samples in these analytical systems. In addition, the sensing performance of the SC-PCF is demonstrated with methylene blue for absorptive molecular detection and with human cardiac troponin T protein, by utilizing a Sagnac interferometry configuration, for ultra-sensitive and specific biomolecular specimen detection. Owing to its great flexibility and compactness, high sensitivity to analyte variation, and efficient liquid manipulation/replacement, the demonstrated SC-PCF offers a generic solution that can be adapted to various fiber-waveguide sensors to detect a wide range of analytes in real time, especially for applications from environmental monitoring to biological diagnosis.

  1. GSM module for wireless radiation monitoring system via SMS

    NASA Astrophysics Data System (ADS)

    Rahman, Nur Aira Abd; Hisyam Ibrahim, Noor; Lombigit, Lojius; Azman, Azraf; Jaafar, Zainudin; Arymaswati Abdullah, Nor; Hadzir Patai Mohamad, Glam

    2018-01-01

    A customised Global System for Mobile communication (GSM) module is designed for wireless radiation monitoring through the Short Messaging Service (SMS). This module is able to receive serial data from radiation monitoring devices such as a survey meter or an area monitor and transmit the data as text SMS to a host server. It provides two-way communication for data transmission, status queries, and configuration setup. The module hardware consists of a GSM module, a voltage-level shifter, a SIM circuit, and an ATmega328P microcontroller. The microcontroller controls sending, receiving, and AT-command processing for the GSM module. The firmware handles tasks related to communication between the device and the host server. It processes all incoming SMS messages; extracts and stores new configuration settings from the host; transmits an alert/notification SMS when the radiation reading reaches or exceeds the threshold value; and transmits data SMS messages at a fixed interval according to the configuration. Integrating this module with a radiation survey/monitoring device creates a mobile, wireless radiation monitoring system with prompt emergency alerts at high radiation levels.
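
    The firmware behaviour described above (threshold alerts plus periodic reports) can be summarised in a small, hedged outline. The real firmware runs in C on the ATmega328P; the Python below is only a sketch of the control flow, and the function names, threshold, and reporting interval are assumptions.

```python
# Hedged outline of the reporting logic only; not the actual ATmega328P firmware.
import time

def monitor_loop(read_dose_rate, send_sms, host_number,
                 threshold_usv_h=10.0, report_interval_s=3600):
    last_report = 0.0
    while True:
        dose = read_dose_rate()                     # serial reading from the survey meter
        now = time.time()
        if dose >= threshold_usv_h:                 # alert when the threshold is reached
            send_sms(host_number, f"ALERT dose={dose:.2f} uSv/h")
        if now - last_report >= report_interval_s:  # routine report at a fixed interval
            send_sms(host_number, f"DATA dose={dose:.2f} uSv/h")
            last_report = now
        time.sleep(1.0)
```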

  2. Turbomachine monitoring system and method

    DOEpatents

    Delvaux, John McConnell

    2016-02-23

    In an embodiment, a system includes a turbomachine having a first turbomachine component including a first mechanoluminescent material. The first turbomachine component is configured to produce a first light emission upon exposure to a mechanical stimulus sufficient to cause mechanoluminescence by the first mechanoluminescent material. The system also includes a turbomachine monitoring system configured to monitor the structural health of the first component based on detection of the first light emission.

  3. Assessing the Effects of Multi-Node Sensor Network Configurations on the Operational Tempo

    DTIC Science & Technology

    2014-09-01

    The LPISimNet software tool provides the capability to quantify the performance of sensor network configurations. Only fragments of the original abstract are preserved in this indexed record; besides the sentence above, they define the noise power of the receiver and an implementation loss of the receiver due to hardware manufacturing.

  4. Thermal energy storage devices, systems, and thermal energy storage device monitoring methods

    DOEpatents

    Tugurlan, Maria; Tuffner, Francis K; Chassin, David P.

    2016-09-13

    Thermal energy storage devices, systems, and thermal energy storage device monitoring methods are described. According to one aspect, a thermal energy storage device includes a reservoir configured to hold a thermal energy storage medium, a temperature control system configured to adjust a temperature of the thermal energy storage medium, and a state observation system configured to provide information regarding an energy state of the thermal energy storage device at a plurality of different moments in time.

  5. System and methods to determine and monitor changes in microstructural properties

    DOEpatents

    Turner, Joseph A

    2014-11-18

    A system and methods are described with which changes in microstructure properties such as grain size, grain elongation, texture, and porosity of materials can be determined and monitored over time to assess conditions such as stress and defects. An example system includes a number of ultrasonic transducers configured to transmit ultrasonic waves towards a target region on a specimen, a voltage source configured to excite the ultrasonic transducers, and a processor configured to determine one or more properties of the specimen.

  6. Nose-to-tail analysis of an airbreathing hypersonic vehicle using an in-house simplified tool

    NASA Astrophysics Data System (ADS)

    Piscitelli, Filomena; Cutrone, Luigi; Pezzella, Giuseppe; Roncioni, Pietro; Marini, Marco

    2017-07-01

    SPREAD (Scramjet PREliminary Aerothermodynamic Design) is a simplified, in-house method developed by CIRA (Italian Aerospace Research Centre) that provides a preliminary estimation of engine/aeroshape performance for airbreathing configurations. It is especially useful for scramjet engines, for which the strong coupling between the aerothermodynamic (external) and propulsive (internal) flow fields requires real-time screening of several engine/aeroshape configurations and the identification of the most promising one(s) with respect to user-defined constraints and requirements. The outcome of this tool defines the baseline configuration for further design analyses with more accurate tools, e.g., CFD simulations and wind tunnel testing. The SPREAD tool has been used to perform the nose-to-tail analysis of the LAPCAT-II Mach 8 MR2.4 vehicle configuration. The numerical results demonstrate SPREAD's capability to quickly predict reliable values of the aero-propulsive balance (i.e., net thrust) and aerodynamic efficiency in a pre-design phase.

  7. Wireless communication devices and movement monitoring methods

    DOEpatents

    Skorpik, James R.

    2006-10-31

    Wireless communication devices and movement monitoring methods are described. In one aspect, a wireless communication device includes a housing, wireless communication circuitry coupled with the housing and configured to communicate wireless signals, movement circuitry coupled with the housing and configured to provide movement data regarding movement sensed by the movement circuitry, and event processing circuitry coupled with the housing and the movement circuitry, wherein the event processing circuitry is configured to process the movement data, and wherein at least a portion of the event processing circuitry is configured to operate in a first operational state having a different power consumption rate compared with a second operational state.

  8. Actively controlling coolant-cooled cold plate configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chainer, Timothy J.; Parida, Pritish R.

    A method is provided to facilitate active control of thermal and fluid dynamic performance of a coolant-cooled cold plate. The method includes: monitoring a variable associated with at least one of the coolant-cooled cold plate or one or more electronic components being cooled by the cold plate; and dynamically varying, based on the monitored variable, a physical configuration of the cold plate. By dynamically varying the physical configuration, the thermal and fluid dynamic performance of the cold plate are adjusted to, for example, optimally cool the one or more electronic components, and at the same time, reduce cooling power consumption used in cooling the electronic component(s). The physical configuration can be adjusted by providing one or more adjustable plates within the coolant-cooled cold plate, the positioning of which may be adjusted based on the monitored variable.
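
    As a rough illustration of the feedback idea (not the patented method itself), a single control step might map a monitored component temperature to a new adjustable-plate position. The setpoint, gain, bounds, and function names below are assumptions made for the sketch.

```python
# Assumed setpoint, gain and bounds; one step of the feedback described above,
# mapping a monitored temperature to a new adjustable-plate position.
def adjust_plate(temp_c, position, setpoint_c=60.0, gain=0.02):
    """Return a new plate position in [0, 1]; larger values open the coolant path."""
    error = temp_c - setpoint_c
    return min(1.0, max(0.0, position + gain * error))

print(adjust_plate(temp_c=68.0, position=0.5))  # hotter than setpoint -> opens further
```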

  9. Monitoring by forward scatter radar techniques: an improved second-order analytical model

    NASA Astrophysics Data System (ADS)

    Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco

    2017-10-01

    In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration with a rectangular metallic object illuminated while crossing the baseline, in far- or near-field conditions, is considered. An improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate benefits and limitations of the second-order modeling. The results are validated by developing full-wave numerical simulations implementing the relevant scattering problem on a commercial tool.

  10. Space Flight Operations Center local area network

    NASA Technical Reports Server (NTRS)

    Goodman, Ross V.

    1988-01-01

    The existing Mission Control and Computer Center at JPL will be replaced by the Space Flight Operations Center (SFOC). One part of the SFOC is the LAN-based distribution system. The purpose of the LAN is to distribute the processed data among the various elements of the SFOC. The SFOC LAN will provide a robust subsystem that will support the Magellan launch configuration and future project adaptation. Its capabilities include (1) a proven cable medium as the backbone for the entire network; (2) hardware components that are reliable, varied, and follow OSI standards; (3) accurate and detailed documentation for fault isolation and future expansion; and (4) proven monitoring and maintenance tools.

  11. Chimera Grid Tools

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  12. Detection of periods of food intake using Support Vector Machines.

    PubMed

    Lopez-Meyer, Paulo; Schuckers, Stephanie; Makeyev, Oleksandr; Sazonov, Edward

    2010-01-01

    Studies of obesity and eating disorders need objective tools for Monitoring of Ingestive Behavior (MIB) that can detect and characterize food intake. In this paper we describe detection of food intake by a Support Vector Machine classifier trained on the time history of chews and swallows. The training was performed on data collected from 18 subjects in 72 experiments involving eating and other activities (for example, talking). The highest accuracy of detecting food intake (94%) was achieved in a configuration where both chews and swallows were used as predictors. Using only swallowing as a predictor resulted in 80% accuracy. Experimental results suggest that these two predictors may be used for differentiation between periods of resting and food intake with a resolution of 30 seconds. The proposed methods may be utilized for the development of an accurate, inexpensive, and non-intrusive methodology to objectively monitor food intake in free-living conditions.
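
    A minimal sketch of the classification step, assuming synthetic chew/swallow counts per 30-second epoch (the features and data below are illustrative, not the study's), could use scikit-learn's SVM as follows.

```python
# Synthetic chew/swallow counts per 30-second epoch; not the study's data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# X: one row per epoch -> [chews_per_epoch, swallows_per_epoch]
X = np.array([[42, 3], [38, 4], [0, 1], [2, 0], [35, 2], [1, 0]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = food intake, 0 = other activity

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```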

  13. Intuitive operability evaluation of surgical robot using brain activity measurement to determine immersive reality.

    PubMed

    Miura, Satoshi; Kobayashi, Yo; Kawamura, Kazuya; Seki, Masatoshi; Nakashima, Yasutaka; Noguchi, Takehiko; Kasuya, Masahiro; Yokoo, Yuki; Fujie, Masakatsu G

    2012-01-01

    Surgical robots have improved considerably in recent years, but intuitive operability, which represents user inter-operability, has not been quantitatively evaluated. Therefore, for design of a robot with intuitive operability, we propose a method to measure brain activity to determine intuitive operability. The objective of this paper is to determine the master configuration against the monitor that allows users to perceive the manipulator as part of their own body. We assume that the master configuration produces an immersive reality experience for the user of putting his own arm into the monitor. In our experiments, as subjects controlled the hand controller to position the tip of the virtual slave manipulator on a target in a surgical simulator, we measured brain activity through brain-imaging devices. We performed our experiments for a variety of master manipulator configurations with the monitor position fixed. For all test subjects, we found that brain activity was stimulated significantly when the master manipulator was located behind the monitor. We conclude that this master configuration produces immersive reality through the body image, which is related to visual and somatic sense feedback.

  14. Simulation tools for guided wave based structural health monitoring

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Imperiale, Alexandre; Demaldent, Edouard; Baronian, Vahan; Chapuis, Bastien

    2018-04-01

    Structural Health Monitoring (SHM) is a discipline derived from Non-Destructive Evaluation (NDE) based on the integration of sensors onto or into a structure in order to monitor its health without disturbing its regular operating cycle. Guided wave based SHM relies on the propagation of guided waves in plate-like or extruded structures. Using piezoelectric transducers to generate and receive guided waves is one of the most widely accepted paradigms due to the low cost and low weight of those sensors. A wide range of techniques for flaw detection based on the aforementioned setup is available in the literature, but very few of these techniques have found industrial applications yet. A major difficulty comes from the sensitivity of guided waves to a substantial number of parameters, such as temperature or geometrical singularities, making guided wave measurements difficult to analyze. In order to apply guided wave based SHM techniques to a wider spectrum of applications and to transfer those techniques to industry, the CEA LIST develops novel numerical methods. These methods facilitate the evaluation of the robustness of SHM techniques for multiple applicative cases and ease the analysis of the influence of various parameters, such as sensor positioning or environmental conditions. The first numerical tool is the guided wave module integrated into the commercial software CIVA, relying on a hybrid modal-finite element formulation to compute the guided wave response of perturbations (cavities, flaws…) in extruded structures of arbitrary cross section such as rails or pipes. The second numerical tool is based on the spectral element method [2] and simulates guided waves in both isotropic (metals) and orthotropic (composites) plate-like structures. This tool is designed to match the widely accepted sparse piezoelectric transducer array SHM configuration in which each embedded sensor acts as both emitter and receiver of guided waves. This tool is under development and will be adapted to simulate complex real-life structures such as curved composite panels with stiffeners. This communication will present these numerical tools and their main functionalities.

  15. Energy landscapes for a machine-learning prediction of patient discharge

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2016-06-01

    The energy landscapes framework is applied to a configuration space generated by training the parameters of a neural network. In this study the input data consists of time series for a collection of vital signs monitored for hospital patients, and the outcomes are patient discharge or continued hospitalisation. Using machine learning as a predictive diagnostic tool to identify patterns in large quantities of electronic health record data in real time is a very attractive approach for supporting clinical decisions, which have the potential to improve patient outcomes and reduce waiting times for discharge. Here we report some preliminary analysis to show how machine learning might be applied. In particular, we visualize the fitting landscape in terms of locally optimal neural networks and the connections between them in parameter space. We anticipate that these results, and analogues of thermodynamic properties for molecular systems, may help in the future design of improved predictive tools.
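
    As a purely illustrative sketch of the prediction task (the study trains its own neural networks and explores their parameter-space landscape; the synthetic features and scikit-learn model below are assumptions for demonstration only):

```python
# Synthetic stand-in for vital-sign summary features; illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Features per patient: e.g. mean heart rate, mean SpO2, temperature trend
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 2] + 0.3 * rng.normal(size=200) > 0).astype(int)  # 1 = discharge

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X[:150], y[:150])
print("held-out accuracy:", net.score(X[150:], y[150:]))
```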

  16. Conducting Creativity Brainstorming Sessions in Small and Medium-Sized Enterprises Using Computer-Mediated Communication Tools

    NASA Astrophysics Data System (ADS)

    Murthy, Uday S.

    A variety of Web-based low cost computer-mediated communication (CMC) tools are now available for use by small and medium-sized enterprises (SME). These tools invariably incorporate chat systems that facilitate simultaneous input in synchronous electronic meeting environments, allowing what is referred to as “electronic brainstorming.” Although prior research in information systems (IS) has established that electronic brainstorming can be superior to face-to-face brainstorming, there is a lack of detailed guidance regarding how CMC tools should be optimally configured to foster creativity in SMEs. This paper discusses factors to be considered in using CMC tools for creativity brainstorming and proposes recommendations for optimally configuring CMC tools to enhance creativity in SMEs. The recommendations are based on lessons learned from several recent experimental studies on the use of CMC tools for rich brainstorming tasks that require participants to invoke domain-specific knowledge. Based on a consideration of the advantages and disadvantages of the various configuration options, the recommendations provided can form the basis for selecting a CMC tool for creativity brainstorming or for creating an in-house CMC tool for the purpose.

  17. Detecting and tracking dust outbreaks by using high temporal resolution satellite data

    NASA Astrophysics Data System (ADS)

    Sannazzaro, Filomena; Marchese, Francesco; Filizzola, Carolina; Tramutoli, Valerio; Pergola, Nicola; Mazzeo, Giuseppe; Paciello, Rossana

    2013-04-01

    A dust storm is a meteorological phenomenon generated by the action of wind, mainly in arid and semi-arid regions of the planet, particularly at subtropical latitudes. Dust outbreaks, whose frequency increases from year to year with climate change and declining soil moisture, can strongly affect human activity, the environment, and climate. Efficient early warning systems are therefore required to monitor them and to mitigate their effects. Satellite remote sensing, thanks to its global coverage, high observation frequency, and low data costs, is an important tool for studying and monitoring dust outbreaks. Several techniques have therefore been proposed to detect and monitor these phenomena from space by analyzing the signal in different bands of the electromagnetic spectrum. In particular, methods based on the reverse absorption behaviour of silicate particles, in comparison with ice crystals and water droplets, at 11 and 12 micron wavelengths have been widely employed for detecting dust, although some important issues in terms of both reliability and sensitivity still remain. In this work, an optimized configuration of an innovative algorithm for dust detection, based on the widely accepted Robust Satellite Techniques (RST) multi-temporal approach, is presented. This optimized algorithm configuration is tested on Spinning Enhanced Visible and Infrared Imager (SEVIRI) data by analyzing some important dust events affecting the Mediterranean basin in recent years. Results of this study, assessed against independent satellite-based aerosol products generated from Total Ozone Mapping Spectrometer (TOMS), Ozone Monitoring Instrument (OMI), and Moderate Resolution Imaging Spectroradiometer (MODIS) data, show that when the spectral resolution of SEVIRI is properly exploited, dust and meteorological clouds can be better discriminated. These results encourage further experimentation with the proposed algorithm in view of a possible future implementation in operational monitoring systems.
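
    The reverse-absorption signal mentioned above can be illustrated with a minimal sketch: silicate dust tends to make the 11-12 micron brightness temperature difference negative, unlike water or ice clouds. The threshold and brightness temperatures below are illustrative; the RST approach itself adds a multi-temporal statistical normalisation on top of this kind of spectral test.

```python
# Illustrative split-window test; values and threshold are made up.
import numpy as np

def dust_flag(bt_11, bt_12, threshold=0.0):
    """Boolean mask of pixels whose BT(11 um) - BT(12 um) falls below threshold."""
    return np.asarray(bt_11) - np.asarray(bt_12) < threshold

bt_11 = np.array([285.0, 292.5, 278.1])
bt_12 = np.array([287.2, 291.0, 279.5])
print(dust_flag(bt_11, bt_12))  # [ True False  True]
```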

  18. An Environment for Guideline-based Decision Support Systems for Outpatients Monitoring.

    PubMed

    Zini, Elisa M; Lanzola, Giordano; Bossi, Paolo; Quaglini, Silvana

    2017-08-11

    We propose an architecture for monitoring outpatients that relies on mobile technologies for acquiring data. The goal is to better control the onset of possible side effects between the scheduled visits at the clinic. We analyze the architectural components required to ensure a high level of abstraction from the data. Clinical practice guidelines were formalized with Alium, an authoring tool based on the PROforma language, using SNOMED-CT as a terminology standard. The Alium engine is accessible through a set of APIs that may be leveraged for implementing an application based on standard web technologies to be used by doctors at the clinic. Data sent by patients using mobile devices need to be complemented with those already available in the Electronic Health Record to generate personalized recommendations, so a middleware pursuing data abstraction is required. To comply with current standards, we adopted the HL7 Virtual Medical Record for Clinical Decision Support Logical Model, Release 2. The developed architecture for monitoring outpatients includes: (1) a guideline-based Decision Support System, accessible through a web application, that helps doctors with the prevention, diagnosis, and treatment of therapy side effects; (2) an application for mobile devices, which allows patients to regularly send data to the clinic. In order to tailor the monitoring procedures to the specific patient, the Decision Support System also helps physicians configure the mobile application, suggesting the data to be collected and the associated collection frequency, which may change over time according to the individual patient's condition. A proof of concept has been developed with a system for monitoring the side effects of chemo-radiotherapy in head and neck cancer patients. Our environment introduces two main innovations with respect to similar works in the literature. First, in order to meet specific patients' needs, the Decision Support System also helps physicians properly configure the mobile application. Second, the Decision Support System is continuously fed with patient-reported outcomes.

  19. Display/control requirements for automated VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Kleinman, D. L.; Young, L. R.

    1976-01-01

    A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configurations evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined the system performance of six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.

  20. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  1. Tank waste remediation system configuration management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vann, J.M.

    The configuration management program for the Tank Waste Remediation System (TWRS) Project Mission supports management of the project baseline by providing the mechanisms to identify, document, and control the functional and physical characteristics of the products. This document is one of the tools used to develop and control the mission and work. It is an integrated approach for control of the technical, cost, schedule, and administrative information necessary to manage the configurations for the TWRS Project Mission. Configuration management focuses on five principal activities: configuration management system management, configuration identification, configuration status accounting, change control, and configuration management assessments. TWRS Project personnel must execute work in a controlled fashion. Work must be performed by verbatim use of authorized and released technical information and documentation. Configuration management will be consistently applied across all TWRS Project activities and assessed accordingly. The Project Hanford Management Contract (PHMC) configuration management requirements are prescribed in HNF-MP-013, Configuration Management Plan (FDH 1997a). This TWRS Configuration Management Plan (CMP) implements those requirements and supersedes the Tank Waste Remediation System Configuration Management Program Plan described in Vann, 1996. HNF-SD-WM-CM-014, Tank Waste Remediation System Configuration Management Implementation Plan (Vann, 1997), will be revised to implement the requirements of this plan. This plan provides the responsibilities, actions, and tools necessary to implement the requirements as defined in the above referenced documents.

  2. EPICS as a MARTe Configuration Environment

    NASA Astrophysics Data System (ADS)

    Valcarcel, Daniel F.; Barbalace, Antonio; Neto, André; Duarte, André S.; Alves, Diogo; Carvalho, Bernardo B.; Carvalho, Pedro J.; Sousa, Jorge; Fernandes, Horácio; Goncalves, Bruno; Sartori, Filippo; Manduchi, Gabriele

    2011-08-01

    The Multithreaded Application Real-Time executor (MARTe) software provides an environment for the hard real-time execution of codes while leveraging a standardized algorithm development process. The Experimental Physics and Industrial Control System (EPICS) software allows the deployment and remote monitoring of networked control systems. Channel Access (CA) is the protocol that enables communication between distributed EPICS components. It allows process variables belonging to different systems to be set and monitored across the network. The COntrol and Data Acquisition and Communication (CODAC) system for the ITER Tokamak will be EPICS-based and will be used to monitor and live-configure the plant controllers. Reconfiguration capability in a hard real-time system requires strict latencies from request to actuation and is a key element in the design of the distributed control algorithm. Presently, MARTe and its objects are configured using a well-defined structured language. After each configuration, all objects are destroyed and the system is rebuilt, following the strong hard real-time rule that a real-time system in online mode must behave in a strictly deterministic fashion. This paper presents the design and considerations for using MARTe as a plant controller and making it monitorable and configurable through EPICS without disturbing execution at any time, in particular during a plasma discharge. The solutions designed for this purpose are presented and discussed.
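
    A minimal sketch of the EPICS side of such a setup, using the pyepics Channel Access client, would monitor a process variable and push a new configuration value over CA. The PV names below are hypothetical and are not taken from MARTe or the ITER CODAC system.

```python
# PV names are hypothetical; illustrates monitoring and live configuration over CA.
from epics import PV

def on_update(pvname=None, value=None, **kw):
    print(f"{pvname} changed to {value}")

status = PV("MARTE:LOOP:STATUS")   # read-back process variable
status.add_callback(on_update)     # asynchronous Channel Access monitor

gain = PV("MARTE:CTRL:GAIN")       # configuration process variable
gain.put(1.25)                     # live reconfiguration over Channel Access
```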

  3. In-line and Real-time Monitoring of Resonant Acoustic Mixing by Near-infrared Spectroscopy Combined with Chemometric Technology for Process Analytical Technology Applications in Pharmaceutical Powder Blending Systems.

    PubMed

    Tanaka, Ryoma; Takahashi, Naoyuki; Nakamura, Yasuaki; Hattori, Yusuke; Ashizawa, Kazuhide; Otsuka, Makoto

    2017-01-01

    Resonant acoustic® mixing (RAM) technology performs high-speed mixing by vibration through the control of acceleration and frequency. In recent years, real-time process monitoring and prediction have become of increasing interest, and process analytical technology (PAT) systems will be increasingly introduced into actual manufacturing processes. This study examined the combination of RAM, near-infrared spectroscopy, and chemometrics as a set of PAT tools for introduction into actual pharmaceutical powder blending processes. Content uniformity was assessed with a robust partial least squares regression (PLSR) model constructed to account for the RAM configuration parameters and the changing concentrations of the components. As a result, in-line real-time prediction of the active pharmaceutical ingredient and other additives was successfully demonstrated using chemometrics, indicating that real-time monitoring is feasible. This system is expected to support quality risk management for the RAM method.
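
    A hedged sketch of the chemometric step: a partial least squares regression mapping NIR spectra to active-ingredient concentration, as used for in-line content-uniformity prediction. The spectra below are synthetic stand-ins, not the study's data.

```python
# Synthetic spectra standing in for NIR data; illustration of the PLSR step only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 40, 200
concentration = rng.uniform(5, 15, n_samples)   # % w/w active ingredient
signature = rng.normal(size=n_wavelengths)      # assumed spectral signature of the API
spectra = (np.outer(concentration, signature)
           + 0.05 * rng.normal(size=(n_samples, n_wavelengths)))

pls = PLSRegression(n_components=3)
pls.fit(spectra[:30], concentration[:30])
pred = pls.predict(spectra[30:]).ravel()
print("RMSEP:", np.sqrt(np.mean((pred - concentration[30:]) ** 2)))
```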

  4. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality for which one of the main key advantages is the target conformality allowed by the physical properties of ion species. However, in order to maximise the exploitation of its potentialities, online monitoring is required in order to assert the treatment quality, namely monitoring devices relying on the detection of secondary radiations. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements to simultaneously have a solution that is relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  5. From EGEE Operations Portal towards EGI Operations Portal

    NASA Astrophysics Data System (ADS)

    Cordier, Hélène; L'Orphelin, Cyril; Reynaud, Sylvain; Lequeux, Olivier; Loikkanen, Sinikka; Veyre, Pierre

    Grid operators in EGEE have been using a dedicated dashboard as their central operational tool; it has remained stable and scalable over the last five years despite continuous upgrades driven by specifications from users, monitoring tools, and data providers. In EGEE-III, the recent regionalisation of operations led the Operations Portal developers to conceive a standalone instance of this tool. We show how the dashboard reorganization paved the way for the re-engineering of the portal itself. The outcome is an easily deployable package customized with relevant information sources and specific decentralized operational requirements. This package is composed of a generic and scalable data access mechanism, Lavoisier; Symfony, a well-known PHP framework, for configuration flexibility; and a MySQL database. VO life-cycle and operational information, EGEE broadcast, and downtime notifications are next in the major reorganization, until all other key features of the Operations Portal have been migrated to the framework. Feature specifications will be sketched at the same time to adapt to EGI requirements and to upgrade. Future work on feature regionalisation, new advanced features, and strategy planning will be tracked in EGI-InSPIRE through the Operations Tools Advisory Group (OTAG), where all users, customers, and third parties of the Operations Portal are represented from January 2010.

  6. Low Latitude Ionospheric Effects on Radiowave Propagation

    DTIC Science & Technology

    1998-06-01

    Only fragments of the abstract survive in this indexed record. They describe active earth-based observation equipment, including coherent and non-coherent scatter radars and vertical and oblique incidence sounders; ionospheric monitoring during the experiment consisting of an oblique sounder and apparatus to measure the time-of-flight of transionospheric signals; and equipment configured to monitor the ionosphere directly overhead in the vertical incidence configuration or with an obliquely launched antenna elevation.

  7. Uniformity on the grid via a configuration framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igor V Terekhov et al.

    2003-03-11

    As Grid computing permeates modern computing, Grid solutions continue to emerge and take shape. Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts which are often specific to the virtual organizations that use them. Physically, however, grids are composed of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring, and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.

  8. IFIS Model-Plus: A Web-Based GUI for Visualization, Comparison and Evaluation of Distributed Flood Forecasts and Hindcasts

    NASA Astrophysics Data System (ADS)

    Krajewski, W. F.; Della Libera Zanchetta, A.; Mantilla, R.; Demir, I.

    2017-12-01

    This work explores the use of hydroinformatics tools to provide a user-friendly and accessible interface for executing and assessing the output of real-time flood forecasts using distributed hydrological models. The main result is the implementation of a web system that uses an Iowa Flood Information System (IFIS)-based environment for graphical displays of rainfall-runoff simulation results for both real-time and past storm events. It communicates with the ASYNCH ODE solver to perform large-scale distributed hydrological modeling based on segmentation of the terrain into hillslope-link hydrologic units. The cyber-platform also allows hindcasting of model performance by testing multiple model configurations and assumptions about vertical flows in the soils. The scope of the currently implemented system is the entire set of contributing watersheds for the territory of the state of Iowa. The interface provides resources for visualization of animated maps for different water-related modeled states of the environment, including flood-wave propagation with classification of flood magnitude, runoff generation, surface soil moisture, and total water column in the soil. Additional tools for comparing different model configurations and performing model evaluation against observed variables at monitored sites are also available. The user-friendly interface has been published to the web at the URL http://ifis.iowafloodcenter.org/ifis/sc/modelplus/.

  9. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman, Carol S.; Benzinger, Leonora; Beshers, George; Hammerslag, David; Kimball, John; Kirslis, Peter A.; Render, Hal; Richards, Paul; Terwilliger, Robert

    1985-01-01

    The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. The SAGA system consists of a small number of software components that are adapted by the meta-tools into specific tools for use in the software development application. The modules are designed so that the meta-tools can construct an environment which is both integrated and flexible. The SAGA project is documented in several papers which are presented.

  10. The optimization of needle electrode number and placement for irreversible electroporation of hepatocellular carcinoma

    PubMed Central

    Adeyanju, Oyinlolu O.; Al-Angari, Haitham M.; Sahakian, Alan V.

    2012-01-01

    Background: Irreversible electroporation (IRE) is a novel ablation tool that uses brief high-voltage pulses to treat cancer. The efficacy of the therapy depends upon the distribution of the electric field, which in turn depends upon the configuration of electrodes used. Methods: We sought to optimize the electrode configuration in terms of the distance between electrodes, the depth of electrode insertion, and the number of electrodes. We employed a 3D Finite Element Model and systematically varied the distance between the electrodes and the depth of electrode insertion, monitoring the lowest voltage sufficient to ablate the tumor, VIRE. We also measured the amount of normal (non-cancerous) tissue ablated. Measurements were performed for two electrodes, three electrodes, and four electrodes. The optimal electrode configuration was determined to be the one with the lowest VIRE, as that minimized damage to normal tissue. Results: The optimal electrode configuration to ablate a 2.5 cm spheroidal tumor used two electrodes with a distance of 2 cm between the electrodes and a depth of insertion of 1 cm below the halfway point in the spherical tumor, as measured from the bottom of the electrode. This produced a VIRE of 3700 V. We found that it was generally best to have a small distance between the electrodes and for the center of the electrodes to be inserted at a depth equal to or deeper than the center of the tumor. We also found the distance between electrodes was far more important in influencing the outcome measures when compared with the depth of electrode insertion. Conclusions: Overall, the distribution of electric field is highly dependent upon the electrode configuration, but the optimal configuration can be determined using numerical modeling. Our findings can help guide the clinical application of IRE as well as the selection of the best optimization algorithm to use in finding the optimal electrode configuration. PMID:23077449
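
    The optimisation loop described in the Methods can be sketched as a simple parameter sweep. The field solver is stubbed out here; in the study that role is played by the 3D finite element model, and the candidate spacing and depth values below are illustrative only.

```python
# The solver below is a stub; in the study this role is played by the 3D FEM.
import itertools

def minimum_ablation_voltage(spacing_cm, depth_cm):
    """Placeholder for a field-solver estimate of V_IRE for one configuration."""
    raise NotImplementedError

def best_configuration(spacings, depths, solver=minimum_ablation_voltage):
    """Return (V_IRE, spacing, depth) for the configuration with the lowest voltage."""
    candidates = ((solver(s, d), s, d)
                  for s, d in itertools.product(spacings, depths))
    return min(candidates)

# Example call once a real solver is supplied (values in cm are illustrative):
# best = best_configuration(spacings=[1.5, 2.0, 2.5], depths=[0.5, 1.0, 1.5])
```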

  11. Innovation Configuration Mapping as a Professional Development Tool: The Case of One-to-One Laptop Computing

    ERIC Educational Resources Information Center

    Towndrow, Phillip A.; Fareed, Wan

    2015-01-01

    This article illustrates how findings from a study of teachers' and students' uses of laptop computers in a secondary school in Singapore informed the development of an Innovation Configuration (IC) Map--a tool for identifying and describing alternative ways of implementing innovations based on teachers' unique feelings, preoccupations, thoughts…

  12. Tool for a configurable integrated circuit that uses determination of dynamic power consumption

    NASA Technical Reports Server (NTRS)

    Davoodi, Azadeh (Inventor); French, Matthew C. (Inventor); Agarwal, Deepak (Inventor); Wang, Li (Inventor)

    2011-01-01

    A configurable-logic tool is described that allows minimization of dynamic power within an FPGA design without changing user-entered specifications. The minimization of power may use minimized clock nets as a first-order operation, and a second-order operation that minimizes other factors, such as area of placement, area of clocks, and/or slack.

  13. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    PubMed

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of Geographic Information System (GIS), groundwater modeling, and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques that partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface and hydraulic head gradient and to estimate the groundwater budget of the aquifer. GIS is used for spatial database development, integration with remote sensing, and numerical groundwater flow modeling. The thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software is used as an additional tool to develop supporting data for the numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response of the water table to external influencing factors. The developed model provides an effective tool for evaluating better management options for monitoring future groundwater development in the study area.

  14. Low temperature monitoring system for subsurface barriers

    DOEpatents

    Vinegar, Harold J [Bellaire, TX; McKinzie, II Billy John [Houston, TX

    2009-08-18

    A system for monitoring temperature of a subsurface low temperature zone is described. The system includes a plurality of freeze wells configured to form the low temperature zone, one or more lasers, and a fiber optic cable coupled to at least one laser. A portion of the fiber optic cable is positioned in at least one freeze well. At least one laser is configured to transmit light pulses into a first end of the fiber optic cable. An analyzer is coupled to the fiber optic cable. The analyzer is configured to receive return signals from the light pulses.

  15. Information processing requirements for on-board monitoring of automatic landing

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Karmarkar, J. S.

    1977-01-01

    A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.

  16. Actively controlling coolant-cooled cold plate configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chainer, Timothy J.; Parida, Pritish R.

    Cooling apparatuses are provided to facilitate active control of thermal and fluid dynamic performance of a coolant-cooled cold plate. The cooling apparatus includes the cold plate and a controller. The cold plate couples to one or more electronic components to be cooled, and includes an adjustable physical configuration. The controller dynamically varies the adjustable physical configuration of the cold plate based on a monitored variable associated with the cold plate or the electronic component(s) being cooled by the cold plate. By dynamically varying the physical configuration, the thermal and fluid dynamic performance of the cold plate are adjusted to, for example, optimally cool the electronic component(s), and at the same time, reduce cooling power consumption used in cooling the electronic component(s). The physical configuration can be adjusted by providing one or more adjustable plates within the cold plate, the positioning of which may be adjusted based on the monitored variable.

  17. Extension of the lower bound of monitor solutions of maximally permissive supervisors to non-α net systems

    NASA Astrophysics Data System (ADS)

    Wu, W. H.; Chao, D. Y.

    2016-07-01

    Traditional region-based liveness-enforcing supervisors focus on (1) maximal permissiveness, i.e. not losing legal states, (2) structural simplicity, i.e. a minimal number of monitors, and (3) fast computation. Lately, a number of similar approaches have achieved a minimal configuration using efficient linear programming. However, the relationship between the minimal configuration and the net structure remains unclear. It is important to explore which structures determine the fewest monitors required: once the lower bound is achieved, further iteration to merge (or reduce the number of) monitors is unnecessary. The minimal strongly connected resource subnet (i.e., one in which all places are resources) that contains the set of resource places in a basic siphon is an elementary circuit. Earlier, we showed that the number of monitors required for liveness enforcement and maximal permissiveness equals the number of basic siphons for a subclass of Petri nets modelling manufacturing, called α systems. This paper extends this result to systems more powerful than α systems, so that the number of monitors in a minimal configuration remains lower-bounded by the number of basic siphons. The paper develops the underlying theory and presents examples.

  18. Dynamic Monitoring Reveals Motor Task Characteristics in Prehistoric Technical Gestures

    PubMed Central

    Pfleging, Johannes; Stücheli, Marius; Iovita, Radu; Buchli, Jonas

    2015-01-01

    Reconstructing ancient technical gestures associated with simple tool actions is crucial for understanding the co-evolution of the human forelimb and its associated control-related cognitive functions on the one hand, and of the human technological arsenal on the other hand. Although the topic of gesture is an old one in Paleolithic archaeology and in anthropology in general, very few studies have taken advantage of the new technologies from the science of kinematics in order to improve replicative experimental protocols. Recent work in paleoanthropology has shown the potential of monitored replicative experiments to reconstruct tool-use-related motions through the study of fossil bones, but so far comparatively little has been done to examine the dynamics of the tool itself. In this paper, we demonstrate that we can statistically differentiate gestures used in a simple scraping task through dynamic monitoring. Dynamics combines kinematics (position, orientation, and speed) with contact mechanical parameters (force and torque). Taken together, these parameters are important because they play a role in the formation of a visible archaeological signature, use-wear. We present our new affordable, yet precise methodology for measuring the dynamics of a simple hide-scraping task, carried out using a pull-to (PT) and a push-away (PA) gesture. A strain gage force sensor combined with a visual tag tracking system records force, torque, as well as position and orientation of hafted flint stone tools. The set-up allows switching between two tool configurations, one with distal and the other one with perpendicular hafting of the scrapers, to allow for ethnographically plausible reconstructions. The data show statistically significant differences between the two gestures: scraping away from the body (PA) generates higher shearing forces, but requires greater hand torque. Moreover, most benchmarks associated with the PA gesture are more highly variable than in the PT gesture. These results demonstrate that different gestures used in ‘common’ prehistoric tasks can be distinguished quantitatively based on their dynamic parameters. Future research needs to assess our ability to reconstruct these parameters from observed use-wear patterns. PMID:26284785

  19. Towards easing the configuration and new team member accommodation for open source software based portals

    NASA Astrophysics Data System (ADS)

    Fu, L.; West, P.; Zednik, S.; Fox, P. A.

    2013-12-01

    For simple portals such as vocabulary-based services, which contain small amounts of data and require only hyper-textual representation, it is often overkill to adopt the whole software stack of database, middleware, and front end, or to use a general web development framework as the starting point of development. Directly combining open-source software is a much more favorable approach. However, our experience with the Coastal and Marine Spatial Planning Vocabulary (CMSPV) service portal shows that there are still issues, such as system configuration and accommodating a new team member, that need to be handled carefully. In this contribution, we share our experience in the context of the CMSPV portal and focus on the tools and mechanisms we've developed to ease the configuration job and the incorporation process for new project members. We discuss the configuration issues that arise when we don't have complete control over how the software in use is configured and need to follow existing configuration styles that may not be well documented, especially when multiple pieces of such software need to work together as a combined system. The CMSPV portal is built on two pieces of open-source software that are still under rapid development: a Fuseki data server and an Epimorphics Linked Data API (ELDA) front end. Both lack mature documentation and tutorials. We developed comparison and labeling tools to ease the problem of system configuration. Another problem that slowed down the project is that project members came and went during the development process, so new members needed to start with a partially configured system and incomplete documentation left by old members. We developed documentation/tutorial maintenance mechanisms, based on our comparison and labeling tools, to make it easier for new members to be incorporated into the project. These tools and mechanisms also benefited other projects that reused software components from the CMSPV system.

  20. Effects of cutting parameters and machining environments on surface roughness in hard turning using design of experiment

    NASA Astrophysics Data System (ADS)

    Mia, Mozammel; Bashir, Mahmood Al; Dhar, Nikhil Ranjan

    2016-07-01

    Hard turning is gradually replacing the time consuming conventional turning process, which is typically followed by grinding, by producing surface quality comparable to that of grinding. The hard turned surface roughness depends on the cutting parameters, machining environments and tool insert configurations. In this article the variation of the surface roughness of the produced surfaces with the changes in tool insert configuration, use of coolant and different cutting parameters (cutting speed, feed rate) has been investigated. This investigation was performed in machining AISI 1060 steel, hardened to 56 HRC by heat treatment, using coated carbide inserts under two different machining environments. The depth of cut, fluid pressure and material hardness were kept constant. The Design of Experiment (DOE) was performed to determine the number and combination sets of different cutting parameters. A full factorial analysis has been performed to examine the effect of main factors as well as the interaction effect of factors on surface roughness. A statistical analysis of variance (ANOVA) was employed to determine the combined effect of cutting parameters, environment and tool configuration. The result of this analysis reveals that environment has the most significant impact on surface roughness, followed by feed rate and tool configuration.
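    As an illustration of the full factorial design and ANOVA workflow described above, the sketch below fits a factorial model of surface roughness against cutting speed, feed rate and machining environment using statsmodels. The data are synthetic and the factor levels are hypothetical; the sketch only shows the shape of the analysis, not the paper's measurements.

    ```python
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical 2 x 2 x 2 full factorial design, two replicates per cell.
    rng = np.random.default_rng(0)
    rows = []
    for speed, feed, env in itertools.product([100, 140], [0.10, 0.14], ["dry", "wet"]):
        for _ in range(2):
            ra = (1.2 + 2.5 * feed - 0.001 * speed
                  - (0.25 if env == "wet" else 0.0) + rng.normal(0, 0.03))
            rows.append({"speed": speed, "feed": feed, "environment": env, "Ra": ra})
    df = pd.DataFrame(rows)

    # Full factorial model: main effects plus all interactions, then ANOVA.
    model = ols("Ra ~ C(speed) * C(feed) * C(environment)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    ```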

  1. Performance and evaluation of a coupled prognostic model TAPM over a mountainous complex terrain industrial area

    NASA Astrophysics Data System (ADS)

    Matthaios, Vasileios N.; Triantafyllou, Athanasios G.; Albanis, Triantafyllos A.; Sakkas, Vasileios; Garas, Stelios

    2018-05-01

    Atmospheric modeling is considered an important tool with several applications such as prediction of air pollution levels, air quality management, and environmental impact assessment studies. Therefore, evaluation studies must be made continuously in order to improve the accuracy and the approaches of air quality models. In the present work, an attempt is made to examine the efficiency of the air pollution model TAPM in simulating the surface meteorology, as well as the SO2 concentrations, in a mountainous complex terrain industrial area. Three configurations, the first with default datasets, the second with data assimilation, and the third with updated land use, were run to investigate the surface meteorology for a 3-year period (2009-2011), and one configuration was applied to predict SO2 concentration levels for the year 2011. The modeled hourly averaged meteorological and SO2 concentration values were statistically compared with those from five monitoring stations across the domain to evaluate the model's performance. Statistical measures showed that the surface temperature and relative humidity are predicted well in all three simulations, with an index of agreement (IOA) higher than 0.94 and 0.70, respectively, at all monitoring sites, while an overprediction of extreme low temperature values is noted, with mountain altitudes playing an important role. However, the results also showed that the model's performance with respect to wind is related to the configuration. The TAPM default configuration predicted the wind variables better in the center of the simulation domain than at the boundaries, while the configuration with updated land use improved the boundary horizontal winds. The TAPM assimilation configuration predicted the wind variables fairly well across the whole domain, with IOA higher than 0.83 for the wind speed and higher than 0.85 for the horizontal wind components. Finally, the SO2 concentrations were assessed by the model with IOA varying from 0.37 to 0.57, mostly dependent on the grid/monitoring station of the simulated domain. The present study can be used, with relevant adaptations, as a user guideline for conducting future simulations in mountainous complex terrain.
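    For reference, the index of agreement (IOA) used throughout the evaluation above is commonly computed as Willmott's index; a minimal sketch, assuming paired hourly observed and modeled series:

    ```python
    import numpy as np

    def index_of_agreement(obs, mod):
        """Willmott's index of agreement: 1 is perfect, 0 is no agreement."""
        obs = np.asarray(obs, dtype=float)
        mod = np.asarray(mod, dtype=float)
        obar = obs.mean()
        num = np.sum((mod - obs) ** 2)
        den = np.sum((np.abs(mod - obar) + np.abs(obs - obar)) ** 2)
        return 1.0 - num / den

    # toy example with made-up hourly SO2 values (ug/m3)
    print(index_of_agreement([12.0, 8.5, 20.1, 15.3], [10.2, 9.1, 18.7, 17.0]))
    ```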

  2. The Development of Advanced Passive Acoustic Monitoring Systems Using microMARS Technology

    DTIC Science & Technology

    2015-09-30

    localization that will be available in a number of configurations for deep and shallow water environments alike. OBJECTIVES The project has two...through two test series, first targeting the GPS synchronized shallow water and then the self-synchronized deep water configurations. The project will...main objectives: 1. Development of all the components of a compact passive acoustic monitoring system suitable both for shallow water moored

  3. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator's internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator, since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  4. An ensemble-based algorithm for optimizing the configuration of an in situ soil moisture monitoring network

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, Niels; Verhoest, Niko E. C.; Gobeyn, Sacha; De Baets, Bernard; Verwaeren, Jan; Pauwels, Valentijn R. N.

    2015-04-01

    The continuous monitoring of soil moisture in a permanent network can yield an interesting data product for use in hydrological modeling. Major advantages of in situ observations compared to remote sensing products are the potential vertical extent of the measurements, the finer temporal resolution of the observation time series, the smaller impact of land cover variability on the observation bias, etc. However, two major disadvantages are the typically small integration volume of in situ measurements, and the often large spacing between monitoring locations. This causes only a small part of the modeling domain to be directly observed. Furthermore, the spatial configuration of the monitoring network is typically non-dynamic in time. Generally, e.g. when applying data assimilation, maximizing the observed information under given circumstances will lead to better qualitative and quantitative insight into the hydrological system. It is therefore advisable to perform a prior analysis in order to select those monitoring locations which are most predictive for the unobserved modeling domain. This research focuses on optimizing the configuration of a soil moisture monitoring network in the catchment of the Bellebeek, situated in Belgium. A recursive algorithm, strongly linked to the equations of the Ensemble Kalman Filter, has been developed to select the most predictive locations in the catchment. The basic idea behind the algorithm is twofold. On the one hand, a minimization of the modeled soil moisture ensemble error covariance between the different monitoring locations is intended. This causes the monitoring locations to be as independent as possible regarding the modeled soil moisture dynamics. On the other hand, the modeled soil moisture ensemble error covariance between the monitoring locations and the unobserved modeling domain is maximized. The latter causes a selection of monitoring locations which are more predictive towards unobserved locations. The main factors that will influence the outcome of the algorithm are the following: the choice of the hydrological model, the uncertainty model applied for ensemble generation, the general wetness of the catchment during which the error covariance is computed, etc. In this research the influence of the latter two is examined in more depth. Furthermore, the optimal network configuration resulting from the newly developed algorithm is compared to network configurations obtained by two other algorithms. The first algorithm is based on a temporal stability analysis of the modeled soil moisture in order to identify catchment representative monitoring locations with regard to average conditions. The second algorithm involves the clustering of available spatially distributed data (e.g. land cover and soil maps) that is not obtained by hydrological modeling.
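    The twofold selection criterion described above (weak error covariance among the chosen sites, strong error covariance with the unobserved domain) can be sketched with a simple greedy procedure. This is an illustrative simplification, not the authors' recursive EnKF-linked algorithm; the scoring rule and variable names are assumptions.

    ```python
    import numpy as np

    def select_stations(ensemble, candidate_idx, n_select):
        """Greedy pick of monitoring cells from an ensemble of modeled soil
        moisture fields (shape: members x cells)."""
        anomalies = ensemble - ensemble.mean(axis=0)
        cov = anomalies.T @ anomalies / (ensemble.shape[0] - 1)
        selected = []
        for _ in range(n_select):
            best, best_score = None, -np.inf
            for c in candidate_idx:
                if c in selected:
                    continue
                unobserved = [i for i in range(cov.shape[0]) if i != c and i not in selected]
                gain = np.abs(cov[c, unobserved]).mean()                         # predictiveness
                overlap = np.abs(cov[c, selected]).mean() if selected else 0.0   # redundancy
                score = gain - overlap
                if score > best_score:
                    best, best_score = c, score
            selected.append(best)
        return selected

    # toy ensemble: 50 members over 30 grid cells, pick 3 of the first 10 cells
    ens = np.random.default_rng(1).normal(size=(50, 30))
    print(select_stations(ens, candidate_idx=list(range(10)), n_select=3))
    ```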

  5. Automation Hooks Architecture Trade Study for Flexible Test Orchestration

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.

    2010-01-01

    We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on the integration of three recognized technologies that are currently gaining acceptance within the test industry and, when combined, provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
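    As a toy illustration of the commanding and data-retrieval layer, the sketch below issues a RESTful GET to a test-set resource and parses an XML payload. The endpoint path, port and element names are hypothetical (not a real ATML schema), and service discovery, e.g. via zeroconf, is assumed to have already supplied the host and port.

    ```python
    import requests
    import xml.etree.ElementTree as ET

    def fetch_latest_measurement(host: str, port: int) -> dict:
        """GET one measurement resource and return its tag -> text pairs."""
        url = f"http://{host}:{port}/testset/measurements/latest"
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
        root = ET.fromstring(resp.text)
        return {child.tag: child.text for child in root}

    if __name__ == "__main__":
        # host and port would normally come from zeroconf service discovery
        print(fetch_latest_measurement("192.168.0.42", 8080))
    ```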

  6. Cooperative optimization of reconfigurable machine tool configurations and production process plan

    NASA Astrophysics Data System (ADS)

    Xie, Nan; Li, Aiping; Xue, Wei

    2012-09-01

    The production process plan design and the configurations of a reconfigurable machine tool (RMT) interact with each other. Reasonable process plans with suitable RMT configurations help to improve product quality and reduce production cost. Therefore, a cooperative strategy is needed to solve both issues concurrently. In this paper, a cooperative optimization model for RMT configurations and the production process plan is presented. Its objectives take into account the impacts of both the process and the configuration. Moreover, a novel genetic algorithm is developed to provide optimal or near-optimal solutions: first, its chromosome is redesigned to consist of three parts: operations, process plan, and RMT configurations; second, new selection, crossover and mutation operators are developed to handle the process constraints from the operation processes (OP) graph, since standard operators could otherwise generate illegal solutions violating these limits; eventually, the optimal RMT configurations under the optimal process plan design can be obtained. Finally, a manufacturing line case composed of three RMTs is presented. The case shows that the optimal process plan and RMT configurations are obtained concurrently, the production cost decreases by 6.28%, and non-monetary performance increases by 22%. The proposed method can determine both RMT configurations and the production process plan, and improve production capacity, functionality and equipment utilization for RMTs.

  7. Running and testing GRID services with Puppet at GRIF- IRFU

    NASA Astrophysics Data System (ADS)

    Ferry, S.; Schaer, F.; Meyer, JP

    2015-12-01

    GRIF is a distributed Tier-2 centre, made up of 6 different centres in the Paris region and serving many VOs. The sub-sites are connected by a 10 Gbps private network and share tools for central management. One of the sub-sites, GRIF-IRFU, hosted and maintained at the CEA-Saclay centre, moved a year ago to configuration management using Puppet. Thanks to the versatility of Puppet/Foreman automation, the GRIF-IRFU site maintains the usual grid services, among them a CREAM-CE with TORQUE+Maui (running a batch system with more than 5000 job slots), a DPM storage of more than 2 PB, and Nagios monitoring essentially based on check_mk, as well as centralized services for the French NGI, such as accounting and the Argus central suspension system. We report on the current functionality of Puppet and present the latest tests and evolutions, including monitoring with Graphite, an HTCondor multicore batch system accessed through an ARC-CE, and a Ceph storage file system.

  8. Optimization and Characterization of the Friction Stir Welded Sheets of AA 5754-H111: Monitoring of the Quality of Joints with Thermographic Techniques.

    PubMed

    De Filippis, Luigi Alberto Ciro; Serio, Livia Maria; Palumbo, Davide; De Finis, Rosa; Galietti, Umberto

    2017-10-11

    Friction Stir Welding (FSW) is a solid-state welding process, based on frictional and stirring phenomena, that offers many advantages with respect to traditional welding methods. However, several parameters can affect the quality of the produced joints. In this work, an experimental approach has been used for studying and optimizing the FSW process, applied to 5754-H111 aluminum plates. In particular, the thermal behavior of the material during the process has been investigated, and two thermal indexes correlated with the frictional power input, the maximum temperature and the heating rate of the material, were examined for different configurations of the process parameters (tool travel and rotation speeds). Moreover, other techniques (micrographs, macrographs and destructive tensile tests) were used to support the analysis of the quality of the welded joints in a quantitative way. The potential of the thermographic technique has been demonstrated both for monitoring the FSW process and for predicting the quality of joints in terms of tensile strength.

  9. Using task analysis to understand the Data System Operations Team

    NASA Technical Reports Server (NTRS)

    Holder, Barbara E.

    1994-01-01

    The Data Systems Operations Team (DSOT) currently monitors the Multimission Ground Data System (MGDS) at JPL. The MGDS currently supports five spacecraft and within the next five years, it will support ten spacecraft simultaneously. The ground processing element of the MGDS consists of a distributed UNIX-based system of over 40 nodes and 100 processes. The MGDS system provides operators with little or no information about the system's end-to-end processing status or end-to-end configuration. The lack of system visibility has become a critical issue in the daily operation of the MGDS. A task analysis was conducted to determine what kinds of tools were needed to provide DSOT with useful status information and to prioritize the tool development. The analysis provided the formality and structure needed to get the right information exchange between development and operations. How even a small task analysis can improve developer-operator communications is described, and the challenges associated with conducting a task analysis in a real-time mission operations environment are examined.

  10. The LHCb Run Control

    NASA Astrophysics Data System (ADS)

    Alessio, F.; Barandela, M. C.; Callot, O.; Duval, P.-Y.; Franek, B.; Frank, M.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Neufeld, N.; Sambade, A.; Schwemmer, R.; Somogyi, P.

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, J.

    The U. S. Department of Energy's (DOE) Office of Environmental Management (EM) has the responsibility for cleaning up 60 sites in 22 states that were associated with the legacy of the nation's nuclear weapons program and other research and development activities. These sites are unique and many of the technologies needed to successfully disposition the associated wastes have yet to be developed or would require significant re-engineering to be adapted for future EM cleanup efforts. In 2008, the DOE-EM Engineering and Technology Program (EM-22) released the Engineering and Technology Roadmap in response to Congressional direction and the need to focus on longer-term activities required for the completion of the aforementioned cleanup program. One of the strategic initiatives included in the Roadmap was to enhance long term performance monitoring as defined by 'Develop and deploy cost effective long-term strategies and technologies to monitor closure sites (including soil, groundwater, and surface water) with multiple contaminants (organics, metals and radionuclides) to verify integrated long-term cleanup performance'. To support this long-term monitoring (LTM) strategic initiative, EM-22 and the Savannah River National Laboratory (SRNL) organized and held an interactive symposium, known as the 2009 DOE-EM Long-Term Monitoring Technical Forum, to define and prioritize LTM improvement strategies and products that could be realized within a 3 to 5 year investment time frame. This near-term focus on fundamental research would then be used as a foundation for development of applied programs to improve the closure and long-term performance of EM's legacy waste sites. The Technical Forum was held in Atlanta, GA on February 11-12, 2009, and attended by 57 professionals with a focus on identifying those areas of opportunity that would most effectively advance the transition of the current practices to a more effective strategy for the LTM paradigm. The meeting format encompassed three break-out sessions, which focused on needs and opportunities associated with the following LTM technical areas: (1) Performance Monitoring Tools, (2) Systems, and (3) Information Management. The specific objectives of the Technical Forum were to identify: (1) technical targets for reducing EM costs for life-cycle monitoring; (2) cost-effective approaches and tools to support the transition from active to passive remedies at EM waste sites; and (3) specific goals and objectives associated with the lifecycle monitoring initiatives outlined within the Roadmap. The first Breakout Session on LTM performance measurement tools focused on the integration and improvement of LTM performance measurement and monitoring tools that deal with parameters such as ecosystems, boundary conditions, geophysics, remote sensing, biomarkers, ecological indicators and other types of data used in LTM configurations. Although specific tools were discussed, it was recognized that the Breakout Session could not comprehensively discuss all monitoring technologies in the time provided. Attendees provided key references where other organizations have assessed monitoring tools. Three investment sectors were developed in this Breakout Session. The second Breakout Session was on LTM systems.
The focus of this session was to identify new and inventive LTM systems addressing the framework for interactive parameters such as infrastructure, sensors, diagnostic features, field screening tools, state of the art characterization monitoring systems/concepts, and ecosystem approaches to site conditions and evolution. LTM systems consist of the combination of data acquisition and management efforts, data processing and analysis efforts and reporting tools. The objective of the LTM systems workgroup was to provide a vision and path towards novel and innovative LTM systems, which should be able to provide relevant, actionable information on system performance in a cost-effective manner. Two investment sectors were developed in this Breakout Session. The last Breakout Session of the Technical Forum was on LTM information management. The session focus was on the development and implementation of novel information management systems for LTM including techniques to address data issues such as: efficient management of large and diverse datasets; consistency and comparability in data management and incorporation of accurate historical information; data interpretation and information synthesis including statistical methods, modeling, and visualization; and linkage of data to site management objectives and leveraging information to forge consensus among stakeholders. One investment sector was developed in this Breakout Session.

  12. Monitoring of vapor phase polycyclic aromatic hydrocarbons

    DOEpatents

    Vo-Dinh, Tuan; Hajaligol, Mohammad R.

    2004-06-01

    An apparatus for monitoring vapor phase polycyclic aromatic hydrocarbons in a high-temperature environment has an excitation source producing electromagnetic radiation, an optical path having an optical probe optically communicating the electromagnetic radiation received at a proximal end to a distal end, a spectrometer or polychromator, a detector, and a positioner coupled to the first optical path. The positioner can slidably move the distal end of the optical probe to maintain the distal end position with respect to an area of a material undergoing combustion. The emitted wavelength can be directed to a detector in a single optical probe 180° backscattered configuration, in a dual optical probe 180° backscattered configuration or in a dual optical probe 90° side scattered configuration. The apparatus can be used to monitor an emitted wavelength of energy from a polycyclic aromatic hydrocarbon as it fluoresces in a high temperature environment.

  13. Amplified OTDR systems for multipoint corrosion monitoring.

    PubMed

    Nascimento, Jehan F; Silva, Marcionilo J; Coêlho, Isnaldo J S; Cipriano, Eliel; Martins-Filho, Joaquim F

    2012-01-01

    We present two configurations of an amplified fiber-optic-based corrosion sensor using the optical time domain reflectometry (OTDR) technique as the interrogation method. The sensor system is multipoint, self-referenced, has no moving parts and can measure the corrosion rate several kilometers away from the OTDR equipment. The first OTDR monitoring system employs a remotely pumped in-line EDFA and it is used to evaluate the increase in system reach compared to a non-amplified configuration. The other amplified monitoring system uses an EDFA in booster configuration and we perform corrosion measurements and evaluations of system sensitivity to amplifier gain variations. Our experimental results obtained under controlled laboratory conditions show the advantages of the amplified system in terms of longer system reach with better spatial resolution, and also that the corrosion measurements obtained from our system are not sensitive to 3 dB gain variations.

  14. Amplified OTDR Systems for Multipoint Corrosion Monitoring

    PubMed Central

    Nascimento, Jehan F.; Silva, Marcionilo J.; Coêlho, Isnaldo J. S.; Cipriano, Eliel; Martins-Filho, Joaquim F.

    2012-01-01

    We present two configurations of an amplified fiber-optic-based corrosion sensor using the optical time domain reflectometry (OTDR) technique as the interrogation method. The sensor system is multipoint, self-referenced, has no moving parts and can measure the corrosion rate several kilometers away from the OTDR equipment. The first OTDR monitoring system employs a remotely pumped in-line EDFA and it is used to evaluate the increase in system reach compared to a non-amplified configuration. The other amplified monitoring system uses an EDFA in booster configuration and we perform corrosion measurements and evaluations of system sensitivity to amplifier gain variations. Our experimental results obtained under controlled laboratory conditions show the advantages of the amplified system in terms of longer system reach with better spatial resolution, and also that the corrosion measurements obtained from our system are not sensitive to 3 dB gain variations. PMID:22737017

  15. Improved electrode positions for local impedance measurements in the lung-a simulation study.

    PubMed

    Orschulik, Jakob; Petkau, Rudolf; Wartzek, Tobias; Hochhausen, Nadine; Czaplik, Michael; Leonhardt, Steffen; Teichmann, Daniel

    2016-12-01

    Impedance spectroscopy can be used to analyze the dielectric properties of various materials. In the biomedical domain, it is used as bioimpedance spectroscopy (BIS) to analyze the composition of body tissue. Being a non-invasive, real-time capable technique, it is a promising modality, especially in the field of lung monitoring. Unfortunately, up to now, BIS does not provide any regional lung information as the electrodes are usually placed in hand-to-hand or transthoracic configurations. Even though transthoracic electrode configurations are in general capable of monitoring the lung, no focusing to specific regions is achieved. In order to resolve this issue, we use a finite element model (FEM) of the human body to study the effect of different electrode configurations on measured BIS data. We present evaluation results and show suitable electrode configurations for eight lung regions. We show that, using these optimized configurations, BIS measurements can be focused to desired regions allowing local lung analysis.

  16. Additional self-monitoring tools in the dietary modification component of The Women's Health Initiative.

    PubMed

    Mossavar-Rahmani, Yasmin; Henry, Holly; Rodabough, Rebecca; Bragg, Charlotte; Brewer, Amy; Freed, Trish; Kinzel, Laura; Pedersen, Margaret; Soule, C Oehme; Vosburg, Shirley

    2004-01-01

    Self-monitoring promotes behavior change by increasing awareness of eating habits and building self-efficacy. It is an important component of the Women's Health Initiative dietary intervention. During the first year of intervention, 74% of the total sample of 19,542 dietary intervention participants self-monitored. As the study progressed, the self-monitoring rate declined to 59% by spring 2000. Participants were challenged by an inability to accurately estimate the fat content of restaurant foods and the inconvenience of carrying bulky self-monitoring tools. In 1996, a Self-Monitoring Working Group was organized to develop additional self-monitoring options that were responsive to participant needs. This article describes the original and additional self-monitoring tools and trends in tool use over time. Original tools were the Food Diary and Fat Scan. Additional tools include the Keeping Track of Goals, Quick Scan, Picture Tracker, and Eating Pattern Changes instruments. The additional tools were used by the majority of participants (5,353 of 10,260, or 52%, of participants who were self-monitoring) by spring 2000. Developing self-monitoring tools that are responsive to participant needs increases the likelihood that self-monitoring can enhance dietary reporting adherence, especially in long-term clinical trials.

  17. Engineering the smart factory

    NASA Astrophysics Data System (ADS)

    Harrison, Robert; Vera, Daniel; Ahmad, Bilal

    2016-10-01

    The fourth industrial revolution promises to create what has been called the smart factory. The vision is that within such modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralised decisions. This paper provides a view of this initiative from an automation systems perspective. In this context it considers how future automation systems might be effectively configured and supported through their lifecycles and how integration, application modelling, visualisation and reuse of such systems might be best achieved. The paper briefly describes limitations in current engineering methods, and new emerging approaches including the cyber physical systems (CPS) engineering tools being developed by the automation systems group (ASG) at Warwick Manufacturing Group, University of Warwick, UK.

  18. Risk assessment of occupational exposure to heavy metal mixtures: a study protocol.

    PubMed

    Omrane, Fatma; Gargouri, Imed; Khadhraoui, Moncef; Elleuch, Boubaker; Zmirou-Navier, Denis

    2018-03-05

    Sfax is a highly industrialized city located in the southern region of Tunisia where heavy metal (HM) pollution is now an established fact. The health of its residents, mainly those engaged in industrial metal-based activities, is under threat. Indeed, such workers are exposed to a variety of HM mixtures, and this exposure has cumulative properties. Whereas current HM exposure assessment is mainly carried out using direct air monitoring approaches, the present study aims to assess health risks associated with chronic occupational exposure to HMs in industry, using a modeling approach that will be validated later on. To this end, two questionnaires were used. The first was an identification/descriptive questionnaire aimed at identifying, for each company, the specific activities, materials used, manufactured products and number of employees exposed. The second related to the job-task of the exposed persons, workplace characteristics (dimensions, ventilation, etc.), type of metals and emission configuration in space and time. Indoor air HM concentrations were predicted based on the mathematical models generally used to estimate occupational exposure to volatile substances (such as solvents). Later on, and in order to validate the adopted model, air monitoring will be carried out, as well as some biological monitoring aimed at assessing HM excretion in the urine of workers volunteering to participate. Lastly, an interaction-based hazard index (HIint) and a decision support tool will be used to predict the cumulative risk assessment for HM mixtures. One hundred sixty-one persons working in the 5 participating companies have been identified. Of these, 110 are directly engaged with HMs in the course of the manufacturing process. This model-based prediction of occupational exposure represents an alternative tool that is both time-saving and cost-effective in comparison with direct air monitoring approaches. Following validation of the different models according to job processes, via comparison with direct measurements and exploration of correlations with biological monitoring, these estimates will allow a cumulative risk characterization.
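    The abstract does not state which indoor-air model was adopted; a common starting point in occupational exposure modeling is the steady-state well-mixed room (one-box) model, shown here only for orientation:

    ```latex
    C_{ss} \;=\; \frac{G}{m\,Q}
    ```

    where $C_{ss}$ is the steady-state room concentration, $G$ the contaminant emission rate, $Q$ the room ventilation rate, and $m$ a dimensionless mixing factor with $0 < m \le 1$.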

  19. Simultaneous Video-EEG-ECG Monitoring to Identify Neurocardiac Dysfunction in Mouse Models of Epilepsy.

    PubMed

    Mishra, Vikas; Gautier, Nicole M; Glasscock, Edward

    2018-01-29

    In epilepsy, seizures can evoke cardiac rhythm disturbances such as heart rate changes, conduction blocks, asystoles, and arrhythmias, which can potentially increase risk of sudden unexpected death in epilepsy (SUDEP). Electroencephalography (EEG) and electrocardiography (ECG) are widely used clinical diagnostic tools to monitor for abnormal brain and cardiac rhythms in patients. Here, a technique to simultaneously record video, EEG, and ECG in mice to measure behavior, brain, and cardiac activities, respectively, is described. The technique described herein utilizes a tethered (i.e., wired) recording configuration in which the implanted electrode on the head of the mouse is hard-wired to the recording equipment. Compared to wireless telemetry recording systems, the tethered arrangement possesses several technical advantages such as a greater possible number of channels for recording EEG or other biopotentials; lower electrode costs; and greater frequency bandwidth (i.e., sampling rate) of recordings. The basics of this technique can also be easily modified to accommodate recording other biosignals, such as electromyography (EMG) or plethysmography for assessment of muscle and respiratory activity, respectively. In addition to describing how to perform the EEG-ECG recordings, we also detail methods to quantify the resulting data for seizures, EEG spectral power, cardiac function, and heart rate variability, which we demonstrate in an example experiment using a mouse with epilepsy due to Kcna1 gene deletion. Video-EEG-ECG monitoring in mouse models of epilepsy or other neurological disease provides a powerful tool to identify dysfunction at the level of the brain, heart, or brain-heart interactions.

  20. Embedded wireless sensors for turbomachine component defect monitoring

    DOEpatents

    Tralshawala, Nilesh; Sexton, Daniel White

    2015-11-24

    Various embodiments include detection systems adapted to monitor at least one physical property of a component in a turbomachine. In some embodiments a detection system includes at least one sensor configured to be affixed to a component of a turbomachine, the at least one sensor for sensing information regarding at least one physical property of the turbomachine component during operation of the turbomachine, a signal converter communicatively coupled to the at least one sensor and at least one RF communication device configured to be affixed to a stationary component of the turbomachine, the radio frequency communication device configured to communicate with the at least one signal converter via an RF antenna coupled to the signal converter.

  1. Functional Analysis for an Integrated Capability of Arrival/Departure/Surface Management with Tactical Runway Management

    NASA Technical Reports Server (NTRS)

    Phojanamongkolkij, Nipa; Okuniek, Nikolai; Lohr, Gary W.; Schaper, Meilin; Christoffels, Lothar; Latorella, Kara A.

    2014-01-01

    The runway is a critical resource of any air transport system. It is used for arrivals, departures, and for taxiing aircraft and is universally acknowledged as a constraining factor to capacity for both surface and airspace operations. It follows that investigation of the effective use of runways, both in terms of selection and assignment as well as the timing and sequencing of the traffic, is paramount to efficient traffic flow. Both the German Aerospace Center (DLR) and NASA have developed concepts and tools to improve atomic aspects of coordinated arrival/departure/surface management operations and runway configuration management. In December 2012, NASA entered into a Collaborative Agreement with DLR. Four collaborative work areas were identified, one of which is called "Runway Management." As part of collaborative research in the "Runway Management" area, which is conducted with the DLR Institute of Flight Guidance, located in Braunschweig, the goal is to develop an integrated system comprising the three DLR tools - arrival, departure, and surface management (collectively referred to as A/D/S-MAN) - and NASA's tactical runway configuration management (TRCM) tool. To achieve this goal, it is critical to prepare a concept of operations (ConOps) detailing how the NASA runway management and DLR arrival, departure, and surface management tools will function together to the benefit of each. To assist with the preparation of the ConOps, the integrated NASA and DLR tools are assessed through a functional analysis method described in this report. The report first provides the high-level operational environments for air traffic management (ATM) in Germany and in the U.S., and descriptions of DLR's A/D/S-MAN and NASA's TRCM tools at the level of detail necessary to complement the purpose of the study. Functional analyses of each tool and a completed functional analysis of an integrated system design are presented next in the report. Future efforts to fully develop the ConOps will include: developing scenarios to fully test environmental, procedural, and data availability assumptions; executing the analysis by a walk-through of the integrated system using these scenarios; defining the appropriate role of operators in terms of their monitoring requirements and decision authority; executing the analysis by a walk-through of the integrated system with operator involvement; characterizing the environmental, system data requirements, and operator role assumptions for the ConOps.

  2. Optimal visual-haptic integration with articulated tools.

    PubMed

    Takahashi, Chie; Watt, Simon J

    2017-05-01

    When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and, therefore, in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
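    The optimal (maximum-likelihood) predictions mentioned above follow the standard reliability-weighted cue-combination rule. In conventional notation (not taken from the paper), with visual and haptic size estimates $\hat{S}_V$, $\hat{S}_H$ and variances $\sigma_V^2$, $\sigma_H^2$:

    ```latex
    \hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad
    w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \quad w_H = 1 - w_V, \qquad
    \sigma_{VH}^2 = \frac{\sigma_V^2\,\sigma_H^2}{\sigma_V^2 + \sigma_H^2}
    ```

    so the combined estimate is never less precise than the better single cue, which is the benchmark against which near-optimal integration is judged.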

  3. A multimedia perioperative record keeper for clinical research.

    PubMed

    Perrino, A C; Luther, M A; Phillips, D B; Levin, F L

    1996-05-01

    To develop a multimedia perioperative recordkeeper that provides: 1. synchronous, real-time acquisition of multimedia data, 2. on-line access to the patient's chart data, and 3. advanced data analysis capabilities through integrated, multimedia database and analysis applications. To minimize cost and development time, the system design utilized industry standard hardware components and graphical software development tools. The system was configured to use a Pentium PC complemented with a variety of hardware interfaces to external data sources. These sources included physiologic monitors with data in digital, analog, video, and audio as well as paper-based formats. The development process was guided by trials in over 80 clinical cases and by the critiques from numerous users. As a result of this process, a suite of custom software applications was created to meet the design goals. The Perioperative Data Acquisition application manages data collection from a variety of physiological monitors. The Charter application provides for rapid creation of an electronic medical record from the patient's paper-based chart and investigator's notes. The Multimedia Medical Database application provides a relational database for the organization and management of multimedia data. The Triscreen application provides an integrated data analysis environment with simultaneous, full-motion data display. With recent technological advances in PC power, data acquisition hardware, and software development tools, the clinical researcher now has the ability to collect and examine a more complete perioperative record. It is hoped that the description of the MPR and its development process will assist and encourage others to advance these tools for perioperative research.

  4. User's Guide: Innovation Configurations for NSDC's Standards for Staff Development

    ERIC Educational Resources Information Center

    Roy, Patricia

    2007-01-01

    This 75-page guidebook is a companion to "Moving NSDC's Staff Development Standards into Practice: Innovation Configurations" Volumes I (ED522734) and II (ED522581). Innovation Configurations are a tool that helps educators better understand what the standards look like in practice. Roy, who co-authored the original volumes, introduces a process…

  5. The Influence of School Leadership on Classroom Participation: Examining Configurations of Organizational Supports

    ERIC Educational Resources Information Center

    Sebastian, James; Allensworth, Elaine; Stevens, David

    2014-01-01

    Background: In this paper we call for studying school leadership and its relationship to instruction and learning through approaches that highlight the role of configurations of multiple organizational supports. A configuration-focused approach to studying leadership and other essential supports provides a valuable addition to existing tools in…

  6. Identification of delaminations in composite: structural health monitoring software based on spectral estimation and hierarchical genetic algorithm

    NASA Astrophysics Data System (ADS)

    Nag, A.; Mahapatra, D. Roy; Gopalakrishnan, S.

    2003-10-01

    A hierarchical Genetic Algorithm (GA) is implemented in high-performance spectral finite element software for the identification of delaminations in laminated composite beams. In smart structural health monitoring, the number of delaminations (or any other modes of damage) as well as their locations and sizes are never completely known. Only the healthy structural configuration (mass, stiffness and damping matrices updated from previous phases of monitoring), sensor measurements and some information about the load environment are known. To handle such enormous complexity, a hierarchical GA is used to represent a heterogeneous population consisting of damaged structures with different numbers of delaminations, and its evolution process identifies the correct damage configuration in the structures under monitoring. We exploit this similarity with the evolution process in heterogeneous populations of species in nature to develop an automated procedure for deciding which possible damage configuration might have produced the deviation in the measured signals. The computational efficiency of the identification task is demonstrated by considering a single delamination. The behavior of the fitness function in the GA, which is an important factor for fast convergence, is studied for single and multiple delaminations. Several advantages of the approach in terms of computational cost are discussed. Besides tackling other types of damage configurations, further scope for research on the development of hybrid soft-computing modules is highlighted.

  7. MoKey: A versatile exergame creator for everyday usage.

    PubMed

    Eckert, Martina; López, Marcos; Lázaro, Carlos; Meneses, Juan

    2017-11-27

    Currently, virtual applications for physical exercises are highly appreciated as rehabilitation instruments. This article presents a middleware called "MoKey" (Motion Keyboard), which converts standard off-the-shelf software into exergames (exercise games). A configurable set of gestures, captured by a motion capture camera, is translated into the keystrokes required by the chosen software. The present study assesses the tool's usability and viability with a heterogeneous group of 11 participants, aged 5 to 51, with moderate to severe disabilities, most of whom use a wheelchair. In comparison with FAAST (The Flexible Action and Articulated Skeleton Toolkit), MoKey achieved better results in terms of ease of use and computational load. The viability as an exergame creator tool was proven with the help of four applications (PowerPoint®, e-book reader, Skype®, and Tetris). Success rates of up to 91% were achieved, and subjective perception was rated at 4.5 points (on a 0-5 scale). The middleware provides increased motivation due to the use of favorite software and the advantage of exploiting it for exercise. Used together with communication software or online games, social inclusion can be stimulated. The therapists can employ the tool to monitor the correctness and progress of the exercises.

  8. Grid Stability Awareness System (GSAS) Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feuerborn, Scott; Ma, Jian; Black, Clifton

    The project team developed a software suite named Grid Stability Awareness System (GSAS) for power system near real-time stability monitoring and analysis based on synchrophasor measurements. The software suite consists of five analytical tools: an oscillation monitoring tool, a voltage stability monitoring tool, a transient instability monitoring tool, an angle difference monitoring tool, and an event detection tool. These tools have been integrated into one framework to provide power grid operators with real-time or near real-time stability status of the power grid as well as historical information about system stability. These tools are being considered for real-time use in the operation environment.

  9. Integrated cluster management at Manchester

    NASA Astrophysics Data System (ADS)

    McNab, Andrew; Forti, Alessandra

    2012-12-01

    We describe an integrated management system, built from third-party, open source components, used in operating a large Tier-2 site for particle physics. This system tracks individual assets and records their attributes such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors the services on each host according to the records of what should be running; and cross-references tickets with asset records and per-asset monitoring pages. In addition, scripts which detect problems and automatically remove hosts record these new states in the database, where they are immediately available to operators through the same interface as tickets and monitoring.
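    The step of deriving DHCP and DNS configuration from asset records can be illustrated with a small sketch. The record fields and output formats (ISC dhcpd host blocks and zone-file A records) are typical choices used here for illustration, not necessarily those of the Manchester system.

    ```python
    # minimal asset records; in practice these would come from the asset database
    ASSETS = [
        {"hostname": "node001", "mac": "52:54:00:aa:bb:01", "ip": "10.0.1.11"},
        {"hostname": "node002", "mac": "52:54:00:aa:bb:02", "ip": "10.0.1.12"},
    ]

    def dhcp_host_blocks(assets):
        """Render ISC dhcpd 'host' stanzas pinning each MAC address to its IP."""
        return "\n".join(
            f"host {a['hostname']} {{ hardware ethernet {a['mac']}; "
            f"fixed-address {a['ip']}; }}"
            for a in assets
        )

    def dns_a_records(assets, domain="tier2.example.org"):
        """Render zone-file A records for the same assets."""
        return "\n".join(f"{a['hostname']}.{domain}. IN A {a['ip']}" for a in assets)

    if __name__ == "__main__":
        print(dhcp_host_blocks(ASSETS))
        print(dns_a_records(ASSETS))
    ```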

  10. Implementing a low-cost web-based clinical trial management system for community studies: a case study.

    PubMed

    Geyer, John; Myers, Kathleen; Vander Stoep, Ann; McCarty, Carolyn; Palmer, Nancy; DeSalvo, Amy

    2011-10-01

    Clinical trials with multiple intervention locations and a single research coordinating center can be logistically difficult to implement. Increasingly, web-based systems are used to provide clinical trial support, with many commercial, open source, and proprietary systems in use. New web-based tools are available which can be customized without programming expertise to deliver web-based clinical trial management and data collection functions. The objective was to demonstrate the feasibility of utilizing low-cost configurable applications to create a customized web-based data collection and study management system for a five-site randomized clinical trial establishing the efficacy of providing evidence-based treatment via teleconferencing to children with attention-deficit hyperactivity disorder. The sites are small communities that would not usually be included in traditional randomized trials. A major goal was to develop a database that participants could access from computers in their home communities for direct data entry. Discussed is the selection process leading to the identification and utilization of a cost-effective and user-friendly set of tools capable of customization for data collection and study management tasks. An online assessment collection application, a template-based web portal creation application, and a web-accessible Access 2007 database were selected and customized to provide the following features: schedule appointments, administer and monitor online secure assessments, issue subject incentives, and securely transmit electronic documents between sites. Each tool was configured by users with limited programming expertise. As of June 2011, the system has successfully been used by 125 participants in 5 communities (who have completed 536 sets of assessment questionnaires), 8 community therapists, and 11 research staff at the research coordinating center. Total automation of processes is not possible with the current set of tools as each is loosely affiliated, creating some inefficiency. This system is best suited to investigations with a single data source, e.g., psychosocial questionnaires. New web-based applications can be used by investigators with limited programming experience to implement user-friendly, efficient, and cost-effective tools for multi-site clinical trials with small distant communities. Such systems allow the inclusion in research of populations that are not usually involved in clinical trials.

  11. A new monitor set for the determination of neutron flux parameters in short-time k0-NAA

    NASA Astrophysics Data System (ADS)

    Kubešová, Marie; Kučera, Jan; Fikrle, Marek

    2011-11-01

    Multipurpose research reactors such as LVR-15 in Řež require monitoring of the neutron flux parameters (f, α) in each batch of samples analyzed when k0 standardization in NAA is to be used. The above parameters may change quite unpredictably, because experiments in channels adjacent to those used for NAA require an adjustment of the reactor operation parameters and/or active core configuration. For frequent monitoring of the neutron flux parameters the bare multi-monitor method is very convenient. The well-known Au-Zr tri-isotopic monitor set that provides a good tool for determining f and α after long-time irradiation is not optimal in case of short-time irradiation because only a low activity of the ⁹⁵Zr radionuclide is formed. Therefore, several elements forming radionuclides with suitable half-lives and Q₀ and Ēr parameters in a wide range of values were tested, namely ¹⁹⁸Au, ⁵⁶Mn, ⁸⁸Rb, ¹²⁸I, ¹³⁹Ba, and ²³⁹U. As a result, an optimal mixture was selected consisting of Au, Mn, and Rb to form a well suited monitor set for irradiation at a thermal neutron fluence rate of 3×10¹⁷ m⁻² s⁻¹. The procedure of short-time INAA with the new monitor set for k0 standardization was successfully validated using the synthetic reference material SMELS 1 and several matrix reference materials (RMs) representing matrices of sample types frequently analyzed in our laboratory. The results were obtained using the Kayzero for Windows program.

  12. Approaches to decrease the level of parasitic noise over vibroacoustic channel in terms of configuring information security tools

    NASA Astrophysics Data System (ADS)

    Ivanov, A. V.; Reva, I. L.; Babin, A. A.

    2018-04-01

    The article examines how different placements of vibration transducers affect the effectiveness of protecting rooms used for negotiations. The electro-optical channel, the most typical technical channel of information leakage, which enables remote vibration listening via window glass, was investigated. The modern "Sonata-AB" system, model 4B, is used as the active protection tool. Factors influencing the efficiency of the information security tool configuration have been determined. The results allow the user to reduce the masking interference level as well as the parasitic noise while preserving the protection of the room.

  13. Evaluation of Earthquake Detection Performance in Terms of Quality and Speed in SEISCOMP3 Using New Modules Qceval, Npeval and Sceval

    NASA Astrophysics Data System (ADS)

    Roessler, D.; Weber, B.; Ellguth, E.; Spazier, J.

    2017-12-01

    The geometry of seismic monitoring networks, site conditions and data availability, as well as monitoring targets and strategies, typically impose trade-offs between data quality, earthquake detection sensitivity, false detections and alert times. Network detection capabilities typically change with alteration of the seismic noise level by human activity or by varying weather and sea conditions. To give helpful information to operators and maintenance coordinators, gempa developed a range of tools to evaluate earthquake detection and network performance, including qceval, npeval and sceval. qceval is a module which analyzes waveform quality parameters in real-time and deactivates and reactivates data streams for automatic processing based on waveform quality thresholds. For example, thresholds can be defined for latency, delay, timing quality, spike and gap counts, and rms. As changes in the automatic processing have a direct influence on detection quality and speed, another tool called "npeval" was designed to calculate in real-time the expected time needed to detect and locate earthquakes by evaluating the effective network geometry. The effective network geometry is derived from the configuration of stations participating in the detection. The detection times are shown as an additional layer on the map and updated in real-time as soon as the effective network geometry changes. Yet another new tool, "sceval", is an automatic module which classifies located seismic events (Origins) in real-time. sceval evaluates the spatial distribution of the stations contributing to an Origin. It confirms or rejects the status of Origins, adds comments or leaves the Origin unclassified. The comments are passed to an additional sceval plug-in where the end user can customize event types. This unique identification of real and fake events in earthquake catalogues allows network detection thresholds to be lowered. In real-time monitoring situations operators can limit the processing to events with unclassified Origins, reducing their workload. Classified Origins can be treated specifically by other procedures. These modules have been calibrated and fully tested on several complex seismic monitoring networks in the regions of Indonesia and Northern Chile.
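    A minimal sketch of threshold-based stream gating in the spirit of qceval is shown below; the metric names and threshold values are hypothetical and do not reflect the module's actual configuration keys.

    ```python
    from dataclasses import dataclass

    # hypothetical per-stream thresholds (seconds, counts, amplitude)
    THRESHOLDS = {"latency_s": 60.0, "delay_s": 30.0, "gap_count": 5, "rms": 1.0e4}

    @dataclass
    class StreamQuality:
        latency_s: float
        delay_s: float
        gap_count: int
        rms: float

    def stream_enabled(q: StreamQuality) -> bool:
        """Keep a stream in automatic processing only while every quality
        metric stays at or below its threshold."""
        return all(getattr(q, name) <= limit for name, limit in THRESHOLDS.items())

    print(stream_enabled(StreamQuality(latency_s=12.0, delay_s=4.0, gap_count=0, rms=350.0)))
    print(stream_enabled(StreamQuality(latency_s=95.0, delay_s=4.0, gap_count=0, rms=350.0)))
    ```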

  14. International Radiation Monitoring and Information System (IRMIS)

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Baciu, Florian; Stowisek, Jan; Saluja, Gurdeep; Kenny, Patrick; Albinet, Franck

    2017-09-01

    This article describes the International Radiation Monitoring Information System (IRMIS), which was developed by the International Atomic Energy Agency (IAEA) with the goal of providing Competent Authorities, the IAEA and other international organizations with a client-server based web application to share and visualize large quantities of radiation monitoring data. The data maps the areas of potential impact and can assist countries in taking appropriate protective actions in an emergency. Ever since the Chernobyl nuclear power plant accident in April of 1986, the European Community (EC) has worked towards collecting routine environmental radiological monitoring data from national networked monitoring systems. The European Radiological Data Exchange Platform (EURDEP) was created in 1995 to that end, to provide radiation monitoring data from most European countries reported in nearly real-time. During the response operations for the Fukushima Dai-ichi nuclear power plant accident (March 2011), the IAEA Incident and Emergency Centre (IEC) managed, harmonized and shared the large amount of data that was being generated by different organizations. This task underscored the need for a system which allows sharing large volumes of radiation monitoring data in an emergency. In 2014 EURDEP started the submission of the European radiological data to the International Radiation Monitoring Information System (IRMIS) as a European Regional HUB for IRMIS. IRMIS supports the implementation of the Convention on Early Notification of a Nuclear Accident by providing a web application for the reporting, sharing, visualizing and analysing of large quantities of environmental radiation monitoring data during nuclear or radiological emergencies. IRMIS is not an early warning system that automatically reports when there are significant deviations in radiation levels or when values are detected above certain levels. However, the configuration of the visualization features offered by IRMIS may help Member States to determine where elevated gamma dose rate measurements during a radiological or nuclear emergency indicate that actions to protect the public are necessary. The data can be used to assist emergency responders in determining where and when to take necessary actions to protect the public. This new online tool supports the IAEA's Unified System for Information Exchange in Incidents and Emergencies (USIE), an online tool where competent authorities can access information about all emergency situations, ranging from a lost radioactive source to a full-scale nuclear emergency.

  15. Stuck threaded member extractor tool and extraction methods

    DOEpatents

    Roscosky, James M.; Essay, Shane M.

    2016-02-02

    Disclosed is a tool having a tapered first portion configured to translate a rotational force to the stuck member, a second portion connecting with the first portion and configured to translate the rotational force to the tapered first portion, a planar tip at an end of the first portion and perpendicular to a central axis passing through the first portion and the second portion, a plurality of left-handed splines extending helically around the central axis from the tip toward the second portion, a driver engaged with the second portion and configured to receive a third rotational force from a mechanical manipulator, and a leak seal connected to the driver and configured to form a seal around the stuck member and at least a portion of the driver and prevent gases opposite the stuck member from escaping.

  16. Data Link Test and Analysis System/TCAS Monitor User's Guide

    DOT National Transportation Integrated Search

    1991-02-01

    This document is a user's guide for the Data Link Test and Analysis System (DATAS) Traffic Alert and Collision Avoidance System (TCAS) monitor application. It provides a brief overall hardware description of DATAS configured as a TCAS Monitor, ...

  17. EMIR: a configurable hierarchical system for event monitoring and incident response

    NASA Astrophysics Data System (ADS)

    Deich, William T. S.

    2014-07-01

    The Event Monitor and Incident Response system (emir) is a flexible, general-purpose system for monitoring and responding to all aspects of instrument, telescope, and general facility operations, and has been in use at the Automated Planet Finder telescope for two years. Responses to problems can include both passive actions (e.g. generating alerts) and active actions (e.g. modifying system settings). Emir includes a monitor-and-response daemon, plus graphical user interfaces and text-based clients that automatically configure themselves from data supplied at runtime by the daemon. The daemon is driven by a configuration file that describes each condition to be monitored, the actions to take when the condition is triggered, and how the conditions are aggregated into hierarchical groups of conditions. Emir has been implemented for the Keck Task Library (KTL) keyword-based systems used at Keck and Lick Observatories, but can be readily adapted to many event-driven architectures. This paper discusses the design and implementation of Emir, and the challenges in balancing the competing demands for simplicity, flexibility, power, and extensibility. Emir's design lends itself well to multiple purposes, and in addition to its core monitor and response functions, it provides an effective framework for computing running statistics, aggregate values, and summary state values from the primitive state data generated by other subsystems, and even for creating quick-and-dirty control loops for simple systems.
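
    A minimal sketch of the kind of condition/action/group configuration described above is given below; the dictionary layout, keyword names (HUMIDITY, TRACK_RATE) and handler functions are hypothetical illustrations, not Emir's actual configuration format or the KTL keyword API.

    ```python
    # Hypothetical condition/action configuration in the spirit of Emir: each condition
    # has a trigger predicate, a list of responses, and a group for hierarchical rollup.
    CONFIG = {
        "dome": {
            "humidity_high": {
                "trigger": lambda kw: kw.get("HUMIDITY", 0.0) > 85.0,
                "actions": ["alert_operator", "close_dome"],
            },
        },
        "telescope": {
            "tracking_stalled": {
                "trigger": lambda kw: kw.get("TRACK_RATE", 1.0) == 0.0,
                "actions": ["alert_operator"],
            },
        },
    }

    HANDLERS = {
        "alert_operator": lambda name: print(f"ALERT: {name}"),
        "close_dome": lambda name: print(f"ACTION: closing dome ({name})"),
    }

    def evaluate(keywords: dict) -> dict:
        """Evaluate all conditions, fire their actions, and return per-group status."""
        status = {}
        for group, conditions in CONFIG.items():
            triggered = []
            for name, cond in conditions.items():
                if cond["trigger"](keywords):
                    triggered.append(name)
                    for action in cond["actions"]:
                        HANDLERS[action](f"{group}.{name}")
            status[group] = "FAULT" if triggered else "OK"
        return status

    if __name__ == "__main__":
        print(evaluate({"HUMIDITY": 92.0, "TRACK_RATE": 1.0}))
    ```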

  18. EMC system test performance on Spacelab

    NASA Astrophysics Data System (ADS)

    Schwan, F.

    1982-07-01

    Electromagnetic compatibility testing of the Spacelab engineering model is discussed. Documentation, test procedures (including data monitoring and test configuration set up) and performance assessment approach are described. Equipment was assembled into selected representative flight configurations. The physical and functional interfaces between the subsystems were demonstrated within the integration and test sequence which culminated in the flyable configuration Long Module plus one Pallet.

  19. Increasing Psychotherapists’ Adoption and Implementation of the Evidence-based Practice of Progress Monitoring

    PubMed Central

    Persons, Jacqueline B.; Koerner, Kelly; Eidelman, Polina; Thomas, Cannon; Liu, Howard

    2015-01-01

    Evidence-based practices (EBPs) reach consumers slowly because practitioners are slow to adopt and implement them. We hypothesized that giving psychotherapists a tool + training intervention that was designed to help the therapist integrate the EBP of progress monitoring into his or her usual way of working would be associated with adoption and sustained implementation of the particular progress monitoring tool we trained them to use (the Depression Anxiety Stress Scales on our Online Progress Tracking tool) and would generalize to all types of progress monitoring measures. To test these hypotheses, we developed an online progress monitoring tool and a course that trained psychotherapists to use it, and we assessed progress monitoring behavior in 26 psychotherapists before, during, immediately after, and 12 months after they received the tool and training. Immediately after receiving the tool + training intervention, participants showed statistically significant increases in use of the online tool and of all types of progress monitoring measures. Twelve months later, participants showed sustained use of any type of progress monitoring measure but not the online tool. PMID:26618237

  20. Increasing psychotherapists' adoption and implementation of the evidence-based practice of progress monitoring.

    PubMed

    Persons, Jacqueline B; Koerner, Kelly; Eidelman, Polina; Thomas, Cannon; Liu, Howard

    2016-01-01

    Evidence-based practices (EBPs) reach consumers slowly because practitioners are slow to adopt and implement them. We hypothesized that giving psychotherapists a tool + training intervention that was designed to help the therapist integrate the EBP of progress monitoring into his or her usual way of working would be associated with adoption and sustained implementation of the particular progress monitoring tool we trained them to use (the Depression Anxiety Stress Scales on our Online Progress Tracking tool) and would generalize to all types of progress monitoring measures. To test these hypotheses, we developed an online progress monitoring tool and a course that trained psychotherapists to use it, and we assessed progress monitoring behavior in 26 psychotherapists before, during, immediately after, and 12 months after they received the tool and training. Immediately after receiving the tool + training intervention, participants showed statistically significant increases in use of the online tool and of all types of progress monitoring measures. Twelve months later, participants showed sustained use of any type of progress monitoring measure but not the online tool. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Narrative Configuration: Some Notes on the Workings of Hindsight

    ERIC Educational Resources Information Center

    Kvernbekk, Tone

    2013-01-01

    In this paper I analyze the role of hindsight in narrative configuration. Configuration means the grasping together of disparate elements into a coherent whole. I argue that hindsight, importantly, brings the temporal constraints on what we can know to the fore, but is a double-edged sword. On the one hand, hindsight is an indispensable tool both…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, T.

    SPI/U3.1 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: A rule-based system which identifies sequential dependencies in UNIX access controls. Binary Inspector Tool: Evaluates the release status of system binaries by comparing a crypto-checksum to provided table entries. Change Detection Tool: Maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: Accepts CQL-based scripts (provided) to evaluate queries over the status of system files, configuration of services and many other elements of UNIX system security. Password Security Inspector: Tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may be run independently of the UI.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, Tony

    SPI/U3.2 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: A rule-based system which identifies sequential dependencies in UNIX access controls. Binary Authentication Tool: Evaluates the release status of system binaries by comparing a crypto-checksum to provided table entries. Change Detection Tool: Maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: Accepts CQL-based scripts (provided) to evaluate queries over the status of system files, configuration of services and many other elements of UNIX system security. Password Security Inspector: Tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may be run independently of the UI.

  4. Fastener starter tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Arnett, Michael C. (Inventor); Garton, Harry L. (Inventor); Valentino, William D. (Inventor)

    2003-01-01

    A fastener starter tool includes a number of spring retention fingers for retaining a small part, or combination of parts. The tool has an inner housing, which holds the spring retention fingers, a hand grip, and an outer housing configured to slide over the inner housing and the spring retention fingers toward and away from the hand grip, exposing and opening, or respectively, covering and closing, the spring retention fingers. By sliding the outer housing toward (away from) the hand grip, a part can be released from (retained by) the tool. The tool may include replaceable inserts, for retaining parts, such as screws, and configured to limit the torque applied to the part, to prevent cross threading. The inner housing has means to transfer torque from the hand grip to the insert. The tool may include replaceable bits, the inner housing having means for transferring torque to the replaceable bit.

  5. SPI/U3.2. Security Profile Inspector for UNIX Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, A.

    1994-08-01

    SPI/U3.2 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: A rule-based system which identifies sequential dependencies in UNIX access controls. Binary Authentication Tool: Evaluates the release status of system binaries by comparing a crypto-checksum to provided table entries. Change Detection Tool: Maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: Accepts CQL-based scripts (provided) to evaluate queries over the status of system files, configuration of services and many other elements of UNIX system security. Password Security Inspector: Tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may be run independently of the UI.

  6. Data link test and analysis system/TCAS monitor user's guide

    NASA Astrophysics Data System (ADS)

    Vandongen, John; Wapelhorst, Leo

    1991-02-01

    This document is a user's guide for the Data Link Test and Analysis System (DATAS) Traffic Alert and Collision Avoidance System (TCAS) monitor. It provides a brief overall hardware description of DATAS configured as a TCAS monitor, and the applications software.

  7. Hydrogeophysical monitoring of water infiltration processes

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Ivan; Cassiani, Giorgio; Deiana, Rita; Canone, Davide; Previati, Maurizio

    2010-05-01

    Non-invasive subsurface monitoring has been growing in recent years. Techniques like ground-penetrating radar (GPR) and electrical resistivity tomography (ERT) can be useful in soil water content monitoring (e.g., Vereecken et al., 2006). Some problems remain (e.g. spatial resolution), but the scale is consistent with many applications and hydrological models. The research has to provide ever more quantitative tools, without remaining in the qualitative realm. This is a crucial step toward providing data useful for hydrological modeling. In this work a controlled field infiltration experiment was carried out in August 2009 at the experimental site of Grugliasco, close to the Agricultural Faculty of the University of Torino, Italy. The infiltration was monitored in time lapse by ERT, GPR, and TDR (Time Domain Reflectometry). The sandy soil characteristics of the site have already been described in another experiment [Cassiani et al. 2009a]. The ERT was performed in dipole-dipole configuration, while the GPR used 100 MHz and 500 MHz antennas in WARR configuration. The TDR gages had different lengths. The amount of water which was sprinkled was also monitored over time. Irrigation intensity was always kept smaller than the infiltration capacity, in order not to have any surface ponding. Spectral induced polarization has been used to infer constitutive parameters from soil samples [Cassiani et al. 2009b]. A 2D Richards equation model (Manzini and Ferraris, 2004) was then calibrated with the measurements. References. Cassiani, G., S. Ferraris, M. Giustiniani, R. Deiana and C. Strobbia, 2009a, Time-lapse surface-to-surface GPR measurements to monitor a controlled infiltration experiment, Bollettino di Geofisica Teorica ed Applicata, Vol. 50, 2 Marzo 2009, pp. 209-226. Cassiani, G., A. Kemna, A. Villa, and E. Zimmermann, 2009b, Spectral induced polarization for the characterization of free-phase hydrocarbon contamination in sediments with low clay content, Near Surface Geophysics, special issue on Hydrogeophysics, p. 547-562. Manzini G., and Ferraris S. 2004. Mass-conservative finite-volume methods on 2-D unstructured grids for the Richards equation, Advances in Water Resources 27(12):1199-1215. Huisman, J.A., Hubbard, S.S., Redman, J.D. and Annan, A.P. 2003. Measuring soil water content with ground penetrating radar: A review. Vadose Zone Journal 2, 476-491. Vereecken H., Binley A., Cassiani G., Kharkhordin I., Revil A. and Titov K. 2006. Applied Hydrogeophysics. Springer-Verlag.

  8. Low cost monitoring from space using Landsat TM time series and open source technologies: the case study of Iguazu park

    NASA Astrophysics Data System (ADS)

    Nole, Gabriele; Lasaponara, Rosa

    2015-04-01

    Nowadays, satellite data have become increasingly available, offering a low-cost or even free-of-charge tool with great potential for operational monitoring of vegetation cover, quantitative assessment of urban expansion and urban sprawl, as well as for monitoring of land use changes and soil consumption. This growing observational capacity has also highlighted the need for research efforts aimed at exploring the potential offered by data processing methods and algorithms, in order to exploit this invaluable space-based data source as much as possible. The work herein presented concerns an application study on the monitoring of vegetation cover and urban sprawl conducted with the use of satellite Landsat TM data. The selected test site is the Iguazu park, which is highly significant as one of the most threatened global conservation priorities (http://whc.unesco.org/en/list/303/). In order to produce synthetic maps of the investigated areas to monitor the status of vegetation and ongoing subtle changes, satellite Landsat TM images were classified using two automatic classifiers, Maximum Likelihood (MLC) and Support Vector Machines (SVMs), applied with varying setting parameters, with the aim of comparing their respective performances in terms of robustness, speed and accuracy. All processing steps were developed by integrating Geographical Information Systems and Remote Sensing, and adopting free and open source software. Results pointed out that the SVM classifier with RBF kernel was generally the best choice (with accuracy higher than 90%) among all the configurations compared, and that the use of multiple bands globally improves classification. One of the critical elements found in monitoring urban area expansion is the presence of urban gardens mixed with urban fabric. The use of different configurations for the SVMs, i.e. different kernels and values of the setting parameters, allowed us to calibrate the classifier to cope with a specific need, in our case to achieve a reliable discrimination of urban from non-urban areas. Acknowledgement: This research was performed within the framework of the Great Relevance project "Smart management of cultural heritage sites in Italy and Argentina: Earth Observation and pilot projects" funded by the Ministero degli Affari Esteri e della Cooperazione Internazionale - MAE, 17/04/2014, Prot. nr. 0090692, 2014-2016.
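
    The classifier comparison described above can be sketched with scikit-learn's RBF-kernel SVM tuned over C and gamma; the pixel samples and labels below are synthetic stand-ins, not the Landsat TM bands or the Iguazu training data.

    ```python
    # Sketch of an RBF-kernel SVM tuned over C and gamma on multi-band pixel samples
    # (synthetic placeholders standing in for Landsat TM band values and class labels).
    import numpy as np
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_pixels, n_bands = 600, 6                    # e.g. six reflective TM bands
    X = rng.normal(size=(n_pixels, n_bands))
    y = (X[:, 3] - X[:, 2] > 0).astype(int)       # toy "vegetation vs non-vegetation" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    search = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]},
        cv=5,
    )
    search.fit(X_tr, y_tr)
    print("best parameters:", search.best_params_)
    print("test accuracy:", search.score(X_te, y_te))
    ```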

  9. Ver-i-Fus: an integrated access control and information monitoring and management system

    NASA Astrophysics Data System (ADS)

    Thomopoulos, Stelios C.; Reisman, James G.; Papelis, Yiannis E.

    1997-01-01

    This paper describes the Ver-i-Fus Integrated Access Control and Information Monitoring and Management (IAC-I2M) system that INTELNET Inc. has developed. The Ver-i-Fus IAC-I2M system has been designed to meet the most stringent security and information monitoring requirements while allowing two-way communication between the user and the system. The system offers a flexible interface that permits the integration of practically any sensing device, or combination of sensing devices, including a live-scan fingerprint reader, thus providing biometric verification for enhanced security. Different configurations of the system provide solutions to different sets of access control problems. The re-configurable hardware interface, tied together with biometric verification and a flexible interface that allows Ver-i-Fus to be integrated with an MIS, provides an integrated solution to security, time and attendance, labor monitoring, production monitoring, and payroll applications.

  10. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization the performance is calculated using the ray-tracing tool SolCal. The costs of the heliostats are calculated by use of a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts. For each configuration a cost-performance ratio is calculated. Based on that, the best geometry and field layout can be selected in each optimization step. In order to find the best configuration, this step is repeated until no significant improvement in the results is observed.
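
    The iterate-evaluate-select loop described above can be sketched as a toy genetic algorithm that minimizes a cost-to-performance ratio; the parameter encoding, cost model and performance model below are invented placeholders and do not represent SolCal, the SIJ cost function or the actual heliostat parameters.

    ```python
    # Toy genetic algorithm: candidate heliostat geometry/layout parameter sets evolve
    # toward a minimal cost/performance ratio through selection, crossover and mutation.
    import random

    random.seed(1)
    N_PARAMS, POP, GENS = 4, 30, 40          # e.g. facet size, focal length, row/column spacing

    def performance(p):                       # placeholder for the ray-tracing performance model
        return sum(1.0 / (1.0 + (x - 0.6) ** 2) for x in p)

    def cost(p):                              # placeholder for the detailed cost function
        return 1.0 + sum(0.5 * x for x in p)

    def fitness(p):                           # lower cost-to-performance ratio is better
        return cost(p) / performance(p)

    def mutate(p, rate=0.2):
        return [min(1.0, max(0.0, x + random.gauss(0, 0.1))) if random.random() < rate else x
                for x in p]

    def crossover(a, b):
        cut = random.randrange(1, N_PARAMS)
        return a[:cut] + b[cut:]

    population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness)
        parents = population[: POP // 2]                      # keep the best half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    best = min(population, key=fitness)
    print("best parameters:", [round(x, 3) for x in best], "ratio:", round(fitness(best), 4))
    ```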

  11. Precision replenishable grinding tool and manufacturing process

    DOEpatents

    Makowiecki, D.M.; Kerns, J.A.; Blaedel, K.L.; Colella, N.J.; Davis, P.J.; Juntz, R.S.

    1998-06-09

    A reusable grinding tool consisting of a replaceable single layer of abrasive particles intimately bonded to a precisely configured tool substrate, and a process for manufacturing the grinding tool are disclosed. The tool substrate may be ceramic or metal and the abrasive particles are preferably diamond, but may be cubic boron nitride. The manufacturing process involves: coating a configured tool substrate with layers of metals, such as titanium, copper and titanium, by physical vapor deposition (PVD); applying the abrasive particles to the coated surface by a slurry technique; and brazing the abrasive particles to the tool substrate by alloying the metal layers. The precision control of the composition and thickness of the metal layers enables the bonding of a single layer or several layers of micron-size abrasive particles to the tool surface. The incorporation of an easily dissolved metal layer in the composition allows the removal and replacement of the abrasive particles, thereby providing a process for replenishing a precisely machined grinding tool with fine abrasive particles, thus greatly reducing costs as compared to replacing expensive grinding tools. 11 figs.

  12. Precision replenishable grinding tool and manufacturing process

    DOEpatents

    Makowiecki, Daniel M.; Kerns, John A.; Blaedel, Kenneth L.; Colella, Nicholas J.; Davis, Pete J.; Juntz, Robert S.

    1998-01-01

    A reusable grinding tool consisting of a replaceable single layer of abrasive particles intimately bonded to a precisely configured tool substrate, and a process for manufacturing the grinding tool are disclosed. The tool substrate may be ceramic or metal and the abrasive particles are preferably diamond, but may be cubic boron nitride. The manufacturing process involves: coating a configured tool substrate with layers of metals, such as titanium, copper and titanium, by physical vapor deposition (PVD); applying the abrasive particles to the coated surface by a slurry technique; and brazing the abrasive particles to the tool substrate by alloying the metal layers. The precision control of the composition and thickness of the metal layers enables the bonding of a single layer or several layers of micron-size abrasive particles to the tool surface. The incorporation of an easily dissolved metal layer in the composition allows the removal and replacement of the abrasive particles, thereby providing a process for replenishing a precisely machined grinding tool with fine abrasive particles, thus greatly reducing costs as compared to replacing expensive grinding tools.

  13. The Past, Present, and Future of Configuration Management

    DTIC Science & Technology

    1992-07-01

    surveying the tools and environments, it is possible to extract a set of 15 CM concepts [12] that capture the essence of automated support for CM. These...tools in maintaining the configuration's integrity, as in Jasmine [20]. 9. Subsystem: provide a means to limit the effect of changes and recompilation...workspace facility. Thus, the services model, in essence, is intended to provide plug in/plug out, "black box" capabilities. The initial set of 50

  14. Analyses of integrated aircraft cabin contaminant monitoring network based on Kalman consensus filter.

    PubMed

    Wang, Rui; Li, Yanxiao; Sun, Hui; Chen, Zengqiang

    2017-11-01

    Modern civil aircraft use air-ventilated pressurized cabins subject to limited space. In order to monitor multiple contaminants and overcome the hypersensitivity of a single sensor, the paper constructs an output-correction integrated sensor configuration using sensors with different measurement principles, after comparing it to two other configurations. This proposed configuration works as a node in the distributed wireless sensor network for contaminant monitoring. The corresponding measurement error models of the integrated sensors are also proposed, using the Kalman consensus filter to estimate states and perform data fusion in order to regulate the single-sensor measurement results. The paper develops a sufficient proof of Kalman consensus filter stability when considering the system and observation noises, and compares the mean estimation and mean consensus errors between the Kalman consensus filter and a local Kalman filter. The numerical example analyses show the effectiveness of the algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
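
    A minimal scalar sketch of a Kalman consensus step over a few sensor nodes is shown below; the noise levels, consensus gain and chain topology are assumptions made for illustration and are not the paper's cabin model or filter parameters.

    ```python
    # Minimal scalar Kalman consensus sketch: each node runs a local Kalman update on its
    # own noisy measurement, then nudges its estimate toward its neighbours' estimates.
    import random

    random.seed(0)
    TRUE_VALUE = 5.0               # true contaminant concentration (arbitrary units)
    Q, CONSENSUS_GAIN = 0.01, 0.1  # assumed process noise and consensus weighting
    NODES = [
        {"x": 0.0, "P": 1.0, "R": 0.20},   # nodes differ in measurement noise R,
        {"x": 0.0, "P": 1.0, "R": 0.50},   # mimicking sensors with different principles
        {"x": 0.0, "P": 1.0, "R": 1.00},
    ]
    NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1]}   # simple chain topology

    for step in range(50):
        estimates = [n["x"] for n in NODES]
        for i, n in enumerate(NODES):
            z = TRUE_VALUE + random.gauss(0, n["R"] ** 0.5)   # noisy measurement
            P_pred = n["P"] + Q                                # predict (static state model)
            K = P_pred / (P_pred + n["R"])                     # Kalman gain
            x_upd = n["x"] + K * (z - n["x"])                  # measurement update
            consensus = sum(estimates[j] - n["x"] for j in NEIGHBOURS[i])
            n["x"] = x_upd + CONSENSUS_GAIN * consensus        # consensus correction
            n["P"] = (1 - K) * P_pred

    print("final estimates:", [round(n["x"], 3) for n in NODES])
    ```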

  15. Git Replacement for the

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, P.

    2014-09-23

    GRAPE is a tool for managing software project workflows for the Git version control system. It provides a suite of tools to simplify and configure branch based development, integration with a project's testing suite, and integration with the Atlassian Stash repository hosting tool.

  16. Supersonic civil airplane study and design: Performance and sonic boom

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

    Since aircraft configuration plays an important role in aerodynamic performance and sonic boom shape, the configuration of the next generation supersonic civil transport has to be tailored to meet high aerodynamic performance and low sonic boom requirements. Computational fluid dynamics (CFD) can be used to design airplanes to meet these dual objectives. The work and results in this report are used to support NASA's High Speed Research Program (HSRP). CFD tools and techniques have been developed for general usages of sonic boom propagation study and aerodynamic design. Parallel to the research effort on sonic boom extrapolation, CFD flow solvers have been coupled with a numeric optimization tool to form a design package for aircraft configuration. This CFD optimization package has been applied to configuration design on a low-boom concept and an oblique all-wing concept. A nonlinear unconstrained optimizer for Parallel Virtual Machine has been developed for aerodynamic design and study.

  17. Control system for high power laser drilling workover and completion unit

    DOEpatents

    Zediker, Mark S; Makki, Siamak; Faircloth, Brian O; DeWitt, Ronald A; Allen, Erik C; Underwood, Lance D

    2015-05-12

    A control and monitoring system controls and monitors a high power laser system for performing high power laser operations. The control and monitoring system is configured to perform high power laser operation on, and in, remote and difficult to access locations.

  18. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
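
    A schematic monitor-analyze-adapt loop in the spirit of the RTM description is sketched below; the sampled metrics and the worker-scaling rule are invented for illustration and stand in for the library instrumentation and performance counters the paper uses.

    ```python
    # Schematic RTM-style loop: sample application and hardware metrics, analyze them,
    # and adapt the resource configuration (here, a worker count) when thresholds are crossed.
    import random
    import time

    def sample_metrics():
        # Stand-in for library instrumentation and hardware performance counters.
        return {"cpu_util": random.uniform(0.2, 0.95), "queue_len": random.randint(0, 40)}

    def decide_workers(current: int, m: dict) -> int:
        if m["cpu_util"] > 0.85 or m["queue_len"] > 20:
            return min(current + 1, 16)      # scale out under pressure
        if m["cpu_util"] < 0.30 and m["queue_len"] < 5:
            return max(current - 1, 1)       # scale in when idle
        return current

    workers = 4
    for _ in range(10):                       # a few monitoring cycles for illustration
        metrics = sample_metrics()
        workers = decide_workers(workers, metrics)
        print(f"cpu={metrics['cpu_util']:.2f} queue={metrics['queue_len']:2d} -> workers={workers}")
        time.sleep(0.01)
    ```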

  19. Analytical & Experimental Study of Radio Frequency Cavity Beam Profile Monitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcazar, Mario D.; Yonehara, Katsuya

    The purpose of this analytical and experimental study is multifold: 1) To explore a new, radiation-robust, hadron beam profile monitor for intense neutrino beam applications; 2) To test, demonstrate, and develop a novel gas-filled Radio-Frequency (RF) cavity to use in this monitoring system. Within this context, the first section of the study analyzes the beam distribution across the hadron monitor as well as the ion-production rate inside the RF cavity. Furthermore, a more efficient pixel configuration across the hadron monitor is proposed to provide higher sensitivity to changes in beam displacement. Finally, the results of a benchtop test of the tunable quality factor RF cavity will be presented. The proposed hadron monitor configuration consists of a circular array of RF cavities located at a radial distance of 7 cm (corresponding to the standard deviation of the beam due to scattering) and a gas-filled RF cavity with a quality factor in the range 400-800.

  20. Off-the-shelf real-time monitoring of satellite constellations in a visual 3-D environment

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Hervias, Felipe; Cheng, Cecilia Han; Mactutis, Anthony; Angelino, Robert

    1996-01-01

    The multimission spacecraft analysis system (MSAS) data monitor is a generic software product for future real-time data monitoring and analysis. The system represents the status of a satellite constellation through the shape, color, motion and position of graphical objects floating in a three dimensional virtual reality environment. It may be used for the monitoring of large volumes of data, for viewing results in configurable displays, and for providing high level and detailed views of a constellation of monitored satellites. It is considered that the data monitor is an improvement on conventional graphic and text-based displays as it increases the amount of data that the operator can absorb in a given period, and can be installed and configured without the requirement for software development by the end user. The functionality of the system is described, including: the navigation abilities; the representation of alarms in the cybergrid; limit violation; real-time trend analysis, and alarm status indication.

  1. A rule-based expert system for generating control displays at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Coulter, Karen J.

    1994-12-01

    The integration of a rule-based expert system for generating screen displays for controlling and monitoring instrumentation under the Experimental Physics and Industrial Control System (EPICS) is presented. The expert system is implemented using CLIPS, an expert system shell from the Software Technology Branch at Lyndon B. Johnson Space Center. The user selects the hardware input and output to be displayed and the expert system constructs a graphical control screen appropriate for the data. Such a system provides a method for implementing a common look and feel for displays created by several different users and reduces the amount of time required to create displays for new hardware configurations. Users are able to modify the displays as needed using the EPICS display editor tool.

  2. DRIFTER Web App Development Support

    NASA Technical Reports Server (NTRS)

    Davis, Derrick D.; Armstrong, Curtis D.

    2015-01-01

    During my 2015 internship at Stennis Space Center (SSC) I supported the development of a web based tool to enable user interaction with a low-cost environmental monitoring buoy called the DRIFTER. DRIFTERs are designed by SSC's Applied Science and Technology Projects branch and are used to measure parameters such as water temperature and salinity. Data collected by the buoys help verify measurements by NASA satellites, which contributes to NASA's mission to advance understanding of the Earth by developing technologies to improve the quality of life on our home planet. My main objective during this internship was to support the development of the DRIFTER by writing web-based software that allows the public to view and access data collected by the buoys. In addition, this software would enable DRIFTER owners to configure and control the devices.

  3. Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.

    PubMed

    Ozaki, Nobuyuki

    2002-07-01

    This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion on the suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality: real-time monitoring, and process analysis functionality: a troubleshooting tool. This paper describes the formulation of practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are listed for the surveillance functionality. For the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, which is the merger with process data surveillance, or the SCADA system, is also explained.

  4. Benefits Assessment for Tactical Runway Configuration Management Tool

    NASA Technical Reports Server (NTRS)

    Oseguera-Lohr, Rosa; Phojanamongkolkij, Nipa; Lohr, Gary; Fenbert, James W.

    2013-01-01

    The Tactical Runway Configuration Management (TRCM) software tool was developed to provide air traffic flow managers and supervisors with recommendations for airport configuration changes and runway usage. The objective for this study is to conduct a benefits assessment at Memphis (MEM), Dallas Fort-Worth (DFW) and New York's John F. Kennedy (JFK) airports using the TRCM tool. Results from simulations using the TRCM-generated runway configuration schedule are compared with results using historical schedules. For the 12 days of data used in this analysis, the transit time (arrival fix to spot on airport movement area for arrivals, or spot to departure fix for departures) for MEM departures is greater (7%) than for arrivals (3%); for JFK, there is a benefit for arrivals (9%) but not for departures (-2%); for DFW, arrivals show a slight benefit (1%), but this is offset by departures (-2%). Departure queue length benefits show fewer aircraft in queue for JFK (29%) and MEM (11%), but not for DFW (-13%). Fuel savings for surface operations at MEM are seen for both arrivals and departures. At JFK there are fuel savings for arrivals, but these are offset by increased fuel use for departures. In this study, no surface fuel benefits resulted for DFW. Results suggest that the TRCM algorithm requires modifications for complex surface traffic operations that can cause taxi delays. For all three airports, the average number of changes in flow direction (runway configuration) recommended by TRCM was many times greater than the historical data; TRCM would need to be adapted to a particular airport's needs, to limit the number of changes to acceptable levels. The results from this analysis indicate the TRCM tool can provide benefits at some high-capacity airports. The magnitude of these benefits depends on many airport-specific factors and would require adaptation of the TRCM tool; a detailed assessment is needed prior to determining suitability for a particular airport.

  5. SensorKit: An End-to-End Solution for Environmental Sensor Networking

    NASA Astrophysics Data System (ADS)

    Silva, F.; Graham, E.; Deschon, A.; Lam, Y.; Goldman, J.; Wroclawski, J.; Kaiser, W.; Benzel, T.

    2008-12-01

    Modern day sensor network technology has shown great promise to transform environmental data collection. However, despite the promise, these systems have remained the purview of the engineers and computer scientists who design them rather than a useful tool for the environmental scientists who need them. SensorKit is conceived of as a way to make wireless sensor networks accessible to The People: it is an advanced, powerful tool for sensor data collection that does not require advanced technological know-how. We are aiming to make wireless sensor networks for environmental science as simple as setting up a standard home computer network by providing simple, tested configurations of commercially-available hardware, free and easy-to-use software, and step-by-step tutorials. We designed and built SensorKit using a simplicity-through-sophistication approach, supplying users a powerful sensor to database end-to-end system with a simple and intuitive user interface. Our objective in building SensorKit was to make the prospect of using environmental sensor networks as simple as possible. We built SensorKit from off the shelf hardware components, using the Compact RIO platform from National Instruments for data acquisition due to its modular architecture and flexibility to support a large number of sensor types. In SensorKit, we support various types of analog, digital and networked sensors. Our modular software architecture allows us to abstract sensor details and provide users a common way to acquire data and to command different types of sensors. SensorKit is built on top of the Sensor Processing and Acquisition Network (SPAN), a modular framework for acquiring data in the field, moving it reliably to the scientist institution, and storing it in an easily-accessible database. SPAN allows real-time access to the data in the field by providing various options for long haul communication, such as cellular and satellite links. Our system also features reliable data storage and transmission, using a custody transfer mechanism that ensures data is retained until successful delivery to the scientist can be confirmed. The ability for the scientist to communicate in real-time with the sensor network in the field enables remote sensor reconfiguration and system health and status monitoring. We use a spiral approach of design, test, deploy and revise, and, by going to the field frequently and getting feedback from field scientists, we have been able to include additional functionality that is useful to the scientist while ensuring SensorKit remains intuitive to operate. Users can configure, control, and monitor SensorKit using a number of tools we have developed. An intuitive user interface running on a desktop or laptop allows scientists to setup the system, add and configure sensors, and specify when and how the data will be collected. We also have a mobile version of our interface that runs on a PDA and lets scientists calibrate sensors and "tune" the system while in the field, allowing for data validation before leaving the field and returning to the research lab. SensorKit also features SensorBase, an intuitive user interface built on top of a standard SQL database, which allows scientists to store and share their data with other researchers. 
SensorKit has been used for diverse scientific applications and deployed throughout the world: from studying mercury cycling in rice paddies in China, to ecological research in the neotropical rainforests of Costa Rica, to monitoring the contamination of salt lakes in Argentina.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christoph, G.G; Jackson, K.A.; Neuman, M.C.

    An effective method for detecting computer misuse is the automatic auditing and analysis of on-line user activity. This activity is reflected in the system audit record, by changes in the vulnerability posture of the system configuration, and in other evidence found through active testing of the system. In 1989 we started developing an automatic misuse detection system for the Integrated Computing Network (ICN) at Los Alamos National Laboratory. Since 1990 this system has been operational, monitoring a variety of network systems and services. We call it the Network Anomaly Detection and Intrusion Reporter, or NADIR. During the last year and a half, we expanded NADIR to include processing of audit and activity records for the Cray UNICOS operating system. This new component is called the UNICOS Real-time NADIR, or UNICORN. UNICORN summarizes user activity and system configuration information in statistical profiles. In near real-time, it can compare current activity to historical profiles and test activity against expert rules that express our security policy and define improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. UNICORN is currently operational on four Crays in Los Alamos' main computing network, the ICN.

  7. Software Configuration Management Plan for the B-Plant Canyon Ventilation Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCDANIEL, K.S.

    1999-08-31

    Project W-059 installed a new B Plant Canyon Ventilation System. Monitoring and control of the system is implemented by the Canyon Ventilation Control System (CVCS). This Software Configuration Management Plan provides instructions for change control of the CVCS.

  8. 40 CFR 75.72 - Determination of NOX mass emissions for common stack and multiple stack configurations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the hourly stack flow rate (in scfh). Only one methodology for determining NOX mass emissions shall be...-diluent continuous emissions monitoring system and a flow monitoring system in the common stack, record... maintain a flow monitoring system and diluent monitor in the duct to the common stack from each unit; or...

  9. Perspectives on Wellness Self-Monitoring Tools for Older Adults

    PubMed Central

    Huh, Jina; Le, Thai; Reeder, Blaine; Thompson, Hilaire J.; Demiris, George

    2013-01-01

    Purpose Our purpose was to understand different stakeholder perceptions about the use of self-monitoring tools, specifically in the area of older adults’ personal wellness. In conjunction with the advent of personal health records, tracking personal health using self-monitoring technologies shows promising patient support opportunities. While clinicians’ tools for monitoring of older adults have been explored, we know little about how older adults may self-monitor their wellness and health and how their health care providers would perceive such use. Methods We conducted three focus groups with health care providers (n=10) and four focus groups with community-dwelling older adults (n=31). Results Older adult participants found the concept of self-monitoring unfamiliar, and this narrowed their interest in the use of wellness self-monitoring tools. On the other hand, health care provider participants showed open attitudes towards wellness monitoring tools for older adults and brainstormed about various stakeholders’ use cases. The two participant groups showed diverging perceptions in terms of: perceived uses, stakeholder interests, information ownership and control, and sharing of wellness monitoring tools. Conclusions Our paper provides implications and solutions for how older adults’ wellness self-monitoring tools can enhance patient-health care provider interaction, patient education, and improvement in overall wellness. PMID:24041452

  10. Perspectives on wellness self-monitoring tools for older adults.

    PubMed

    Huh, Jina; Le, Thai; Reeder, Blaine; Thompson, Hilaire J; Demiris, George

    2013-11-01

    Our purpose was to understand different stakeholder perceptions about the use of self-monitoring tools, specifically in the area of older adults' personal wellness. In conjunction with the advent of personal health records, tracking personal health using self-monitoring technologies shows promising patient support opportunities. While clinicians' tools for monitoring of older adults have been explored, we know little about how older adults may self-monitor their wellness and health and how their health care providers would perceive such use. We conducted three focus groups with health care providers (n=10) and four focus groups with community-dwelling older adults (n=31). Older adult participants found the concept of self-monitoring unfamiliar, and this narrowed their interest in the use of wellness self-monitoring tools. On the other hand, health care provider participants showed open attitudes toward wellness monitoring tools for older adults and brainstormed about various stakeholders' use cases. The two participant groups showed diverging perceptions in terms of: perceived uses, stakeholder interests, information ownership and control, and sharing of wellness monitoring tools. Our paper provides implications and solutions for how older adults' wellness self-monitoring tools can enhance patient-health care provider interaction, patient education, and improvement in overall wellness. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. A Summary of the NASA Design Environment for Novel Vertical Lift Vehicles (DELIVER) Project

    NASA Technical Reports Server (NTRS)

    Theodore, Colin R.

    2018-01-01

    The number of new markets and use cases being developed for vertical take-off and landing vehicles continues to explode, including the highly publicized urban air taxi and package delivery applications. There is an equally exploding variety of novel vehicle configurations and sizes that are being proposed to fill these new market applications. The challenge for vehicle designers is that there is currently no easy and consistent way to go from a compelling mission or use case to a vehicle that is best configured and sized for the particular mission. This is because the availability of accurate and validated conceptual design tools for these novel types and sizes of vehicles has not kept pace with the new markets and vehicles themselves. The Design Environment for Novel Vertical Lift Vehicles (DELIVER) project was formulated to address this vehicle design challenge by demonstrating the use of current conceptual design tools, which have been used for decades to design and size conventional rotorcraft, applied to these novel vehicle types, configurations and sizes. In addition to demonstrating the applicability of current design and sizing tools to novel vehicle configurations and sizes, DELIVER also demonstrated the addition of the key transformational technologies of noise, autonomy, and hybrid-electric and all-electric propulsion into the vehicle conceptual design process. Noise is key for community acceptance, autonomy and the ability to operate autonomously are key for efficient, reliable and safe operations, and electrification of the propulsion system is a key enabler for these new vehicle types and sizes. This paper provides a summary of the DELIVER project and shows the applicability of current conceptual design and sizing tools to novel vehicle configurations and sizes that are being proposed for urban air taxi and package delivery type applications.

  12. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.

    1984-01-01

    The project to automate the management of software production systems is described. The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system have been completed in prototype form. The construction methods are described.

  13. On-line, continuous monitoring in solar cell and fuel cell manufacturing using spectral reflectance imaging

    DOEpatents

    Sopori, Bhushan; Rupnowski, Przemyslaw; Ulsh, Michael

    2016-01-12

    A monitoring system 100 comprising a material transport system 104 providing for the transportation of a substantially planar material 102, 107 through the monitoring zone 103 of the monitoring system 100. The system 100 also includes a line camera 106 positioned to obtain multiple line images across a width of the material 102, 107 as it is transported through the monitoring zone 103. The system 100 further includes an illumination source 108 providing for the illumination of the material 102, 107 transported through the monitoring zone 103 such that light reflected in a direction normal to the substantially planar surface of the material 102, 107 is detected by the line camera 106. A data processing system 110 is also provided in digital communication with the line camera 106. The data processing system 110 is configured to receive data output from the line camera 106 and further configured to calculate and provide substantially contemporaneous information relating to a quality parameter of the material 102, 107. Also disclosed are methods of monitoring a quality parameter of a material.

  14. Optimal Design of Air Quality Monitoring Network and its Application in an Oil Refinery Plant: An Approach to Keep Health Status of Workers.

    PubMed

    ZoroufchiBenis, Khaled; Fatehifar, Esmaeil; Ahmadi, Javad; Rouhi, Alireza

    2015-01-01

    Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest level of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. The concept of a cluster of contiguous grids that exceed a threshold value was used to calculate the Station Dosage. Selection of the best configuration of the AQMN was done based on the ratio of a station's dosage to the total dosage in the network. Six monitoring stations were needed to detect the pollutant concentrations around the study area for estimating the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. The optimal AQMN enables authorities to implement an effective program of air quality management for protecting human health.
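
    The ranking idea (station dosage relative to total network dosage) can be sketched as a simple greedy selection; the grid cells, dosage values and stopping criterion below are illustrative assumptions rather than the paper's MATLAB implementation.

    ```python
    # Greedy sketch of monitoring-network design by dosage ratio: candidate grid cells are
    # ranked by how much of the total pollutant "dosage" they cover, and stations are added
    # until a target network efficiency is reached.
    import random

    random.seed(2)
    candidates = {f"grid_{i:02d}": random.uniform(0.0, 10.0) for i in range(40)}  # dosage per cell
    total_dosage = sum(candidates.values())
    TARGET_EFFICIENCY = 0.99

    selected, covered = [], 0.0
    for cell, dosage in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(cell)
        covered += dosage
        if covered / total_dosage >= TARGET_EFFICIENCY:
            break

    print(f"{len(selected)} stations reach {covered / total_dosage:.1%} of total dosage")
    print("selected:", selected)
    ```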

  15. Tools for automated acoustic monitoring within the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    The R package monitoR contains tools for managing an acoustic-monitoring program including survey metadata, template creation and manipulation, automated detection and results management. These tools are scalable for use with small projects as well as larger long-term projects and those with expansive spatial extents. Here, we describe typical workflow when using the tools in monitoR. Typical workflow utilizes a generic sequence of functions, with the option for either binary point matching or spectrogram cross-correlation detectors.

  16. Ethylene monitoring and control system

    NASA Technical Reports Server (NTRS)

    Nelson, Bruce N. (Inventor); Kanc, James A. (Inventor); Richard, II, Roy V. (Inventor)

    2000-01-01

    A system that can accurately monitor and control low concentrations of ethylene gas includes a test chamber configured to receive sample gas potentially containing an ethylene concentration and ozone, a detector configured to receive light produced during a reaction between the ethylene and ozone and to produce signals related thereto, and a computer connected to the detector to process the signals to determine therefrom a value of the concentration of ethylene in the sample gas. The supply for the system can include a four way valve configured to receive pressurized gas at one input and a test chamber. A piston is journaled in the test chamber with a drive end disposed in a drive chamber and a reaction end defining with walls of the test chamber a variable volume reaction chamber. The drive end of the piston is pneumatically connected to two ports of the four way valve to provide motive force to the piston. A manifold is connected to the variable volume reaction chamber, and is configured to receive sample gasses from at least one of a plurality of ports connectable to degreening rooms and to supply the sample gas to the reactive chamber for reaction with ozone. The apparatus can be used to monitor and control the ethylene concentration in multiple degreening rooms.

  17. Ethylene monitoring and control system

    NASA Technical Reports Server (NTRS)

    Nelson, Bruce N. (Inventor); Kane, James A. (Inventor); Richard, II, Roy V. (Inventor)

    2001-01-01

    A system that can accurately monitor and control low concentrations of ethylene gas includes a test chamber configured to receive sample gas potentially containing an ethylene concentration and ozone, a detector configured to receive light produced during a reaction between the ethylene and ozone and to produce signals related thereto, and a computer connected to the detector to process the signals to determine therefrom a value of the concentration of ethylene in the sample gas. The supply for the system can include a four way valve configured to receive pressurized gas at one input and a test chamber. A piston is journaled in the test chamber with a drive end disposed in a drive chamber and a reaction end defining with walls of the test chamber a variable volume reaction chamber. The drive end of the piston is pneumatically connected to two ports of the four way valve to provide motive force to the piston. A manifold is connected to the variable volume reaction chamber, and is configured to receive sample gasses from at least one of a plurality of ports connectable to degreening rooms and to supply the sample gas to the reactive chamber for reaction with ozone. The apparatus can be used to monitor and control the ethylene concentration in multiple degreening rooms.

  18. Development of an ultra-compact mid-infrared attenuated total reflectance spectrophotometer

    NASA Astrophysics Data System (ADS)

    Kim, Dong Soo; Lee, Tae-Ro; Yoon, Gilwon

    2014-07-01

    Mid-infrared spectroscopy has been an important tool widely used for qualitative analysis in various fields. However, portable or personal use is size and cost prohibitive for either Fourier transform infrared or attenuated total reflectance (ATR) spectrophotometers. In this study, we developed an ultra-compact ATR spectrophotometer whose frequency band was 5.5-11.0 μm. We used miniature components, such as a light source fabricated by semiconductor technology, a linear variable filter, and a pyro-electric array detector. There were no moving parts. Optimal design based on two light sources, a zippered configuration of the array detector and ATR optics could produce absorption spectra that might be used for qualitative analysis. A microprocessor synchronized the pulsed light sources and detector, and all the signals were processed digitally. The size was 13.5×8.5×3.5 cm3 and the weight was 300 grams. Due to its low cost, our spectrophotometer can replace many online monitoring devices. Another application could be for a u-healthcare system installed in the bathroom or attached to a smartphone for monitoring substances in body fluids.

  19. Experience with ATLAS MySQL PanDA database service

    NASA Astrophysics Data System (ADS)

    Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  20. Optimization and Characterization of the Friction Stir Welded Sheets of AA 5754-H111: Monitoring of the Quality of Joints with Thermographic Techniques

    PubMed Central

    De Filippis, Luigi Alberto Ciro; Serio, Livia Maria; Galietti, Umberto

    2017-01-01

    Friction Stir Welding (FSW) is a solid-state welding process, based on frictional and stirring phenomena, that offers many advantages with respect to traditional welding methods. However, several parameters can affect the quality of the produced joints. In this work, an experimental approach has been used for studying and optimizing the FSW process applied to 5754-H111 aluminum plates. In particular, the thermal behavior of the material during the process has been investigated, and two thermal indexes correlated with the frictional power input, the maximum temperature and the heating rate of the material, were evaluated for different configurations of the process parameters (tool travel and rotation speeds). Moreover, other techniques (micrographs, macrographs and destructive tensile tests) were carried out to support, in a quantitative way, the analysis of the quality of the welded joints. The potential of the thermographic technique has been demonstrated both for monitoring the FSW process and for predicting the quality of joints in terms of tensile strength. PMID:29019948

  1. Network-based real-time radiation monitoring system in Synchrotron Radiation Research Center.

    PubMed

    Sheu, R J; Wang, J P; Chen, C R; Liu, J; Chang, F D; Jiang, S H

    2003-10-01

    The real-time radiation monitoring system (RMS) in the Synchrotron Radiation Research Center (SRRC) has been upgraded significantly during the past years. The new framework of the RMS is built on the popular network technology, including Ethernet hardware connections and Web-based software interfaces. It features virtually no distance limitations, flexible and scalable equipment connections, faster response time, remote diagnosis, easy maintenance, as well as many graphic user interface software tools. This paper briefly describes the radiation environment in SRRC and presents the system configuration, basic functions, and some operational results of this real-time RMS. Besides the control of radiation exposures, it has been demonstrated that a variety of valuable information or correlations could be extracted from the measured radiation levels delivered by the RMS, including the changes of operating conditions, beam loss pattern, radiation skyshine, and so on. The real-time RMS can be conveniently accessed either using the dedicated client program or World Wide Web interface. The address of the Web site is http://www-rms.srrc.gov.tw.

  2. Managing a Real-Time Embedded Linux Platform with Buildroot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diamond, J.; Martin, K.

    2015-01-01

    Developers of real-time embedded software often need to build the operating system, kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempts to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot [1] system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite that varies from 3 to 20 megabytes in size – ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.

  3. Air traffic management evaluation tool

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar (Inventor); Chatterji, Gano Broto (Inventor); Schipper, John F. (Inventor); Bilimoria, Karl D. (Inventor); Grabbe, Shon (Inventor); Sheth, Kapil S. (Inventor)

    2012-01-01

    Methods for evaluating and implementing air traffic management tools and approaches for managing and avoiding an air traffic incident before the incident occurs. A first system receives parameters for flight plan configurations (e.g., initial fuel carried, flight route, flight route segments followed, flight altitude for a given flight route segment, aircraft velocity for each flight route segment, flight route ascent rate, flight route descent rate, flight departure site, flight departure time, flight arrival time, flight destination site and/or alternate flight destination site), flight plan schedule, expected weather along each flight route segment, aircraft specifics, airspace (altitude) bounds for each flight route segment, and navigational aids available. The invention provides flight plan routing and direct routing or wind optimal routing, using great circle navigation and spherical Earth geometry. The invention provides for aircraft dynamics effects, such as wind effects at each altitude, altitude changes, airspeed changes and aircraft turns to provide predictions of aircraft trajectory (and, optionally, aircraft fuel use). A second system provides several aviation applications using the first system. Several classes of potential incidents are analyzed and averted, by appropriate change en route of one or more parameters in the flight plan configuration, as provided by a conflict detection and resolution module and/or traffic flow management modules. These applications include conflict detection and resolution, miles-in-trail or minutes-in-trail aircraft separation, flight arrival management, flight re-routing, weather prediction and analysis, and interpolation of weather variables based upon sparse measurements. The invention combines these features to provide an aircraft monitoring system and an aircraft user system that interact and negotiate changes with each other.

  4. DPM: Future Proof Storage

    NASA Astrophysics Data System (ADS)

    Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo

    2012-12-01

    The Disk Pool Manager (DPM) is a lightweight solution for grid enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols - namely the additional work required to provide multi-stream transfers for high performance wide area access, support for third party copies, credential delegation, and the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to do in the DPM architecture to better handle the additional load from the analysis use cases.

  5. A printed, dry electrode Frank configuration vest for ambulatory vectorcardiographic monitoring

    NASA Astrophysics Data System (ADS)

    Paul, Gordon; Torah, Russel; Beeby, Steve; Tudor, John

    2017-02-01

    This paper describes the design and fabrication of a screen printed network of bio-potential measurement electrodes on a garment, in this case a vest. The electrodes are placed according to the Frank configuration, which allows monitoring of the electrical behavior of the heart in three spatial orientations. The vest is designed to provide stable contact pressure on the electrodes. The electrodes are fabricated from stencil printed carbon loaded rubber and are connected by screen printed silver polymer conductive tracks to an array of vias, which form an electrical connection to the other side of the textile. The vest is tested and compared to Frank configuration recordings that were obtained using standard self-adhesive ECG electrodes. The vest was successfully used to obtain Frank configuration recordings with minimal baseline drift. The vest is fabricated using only technologies found in standard textile production lines and can be used with a reduced setup effort compared to clinical 12-lead examinations.

  6. Transition Flight Control Room Automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

    The Workstation Prototype Laboratory is currently working on a number of projects which we feel can have a direct impact on ground operations automation. These projects include: The Fuel Cell Monitoring System (FCMS), which will monitor and detect problems with the fuel cells on the Shuttle. FCMS will use a combination of rules (forward/backward) and multi-threaded procedures which run concurrently with the rules, to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. A graphical computation language (AGCOMPL). AGCOMPL is an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) to work on Shuttle or Space Station telemetry and trajectory data. The design of a system which will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. A generic message management (GMM) system. GMM is being designed as a message management system for real-time applications which send advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques, while determining the feasibility of a given approach, including identification of appropriate software tools to support research, application and tool building activities.

  7. Transition flight control room automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

    The Workstation Prototype Laboratory is currently working on a number of projects which can have a direct impact on ground operations automation. These projects include: (1) The fuel cell monitoring system (FCMS), which will monitor and detect problems with the fuel cells on the shuttle. FCMS will use a combination of rules (forward/backward) and multithreaded procedures, which run concurrently with the rules, to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule-based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. (2) A graphical computation language (AGCOMPL) is an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) to work on shuttle or space station telemetry and trajectory data. (3) The design of a system will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. (4) A generic message management (GMM) system is being designed for real-time applications as a message management system which sends advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques, including identification of appropriate software tools to support research, application, and tool building activities, while determining the feasibility of a given approach.

  8. Wireless device monitoring systems and monitoring devices, and associated methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCown, Steven H; Derr, Kurt W; Rohde, Kenneth W

    Wireless device monitoring systems and monitoring devices include a communications module for receiving wireless communications of a wireless device. Processing circuitry is coupled with the communications module and configured to process the wireless communications to determine whether the wireless device is authorized or unauthorized to be present at the monitored area based on identification information of the wireless device. Methods of monitoring for the presence and identity of wireless devices are also provided.

  9. Configuration management issues and objectives for a real-time research flight test support facility

    NASA Technical Reports Server (NTRS)

    Yergensen, Stephen; Rhea, Donald C.

    1988-01-01

    An account is given of configuration management activities for the Western Aeronautical Test Range (WATR) at NASA-Ames, whose primary function is the conduct of aeronautical research flight testing through real-time processing and display, tracking, and communications systems. The processing of WATR configuration change requests for specific research flight test projects must be conducted in such a way as to refrain from compromising the reliability of WATR support to all project users. Configuration management's scope ranges from mission planning to operations monitoring and performance trend analysis.

  10. Computer software configuration description, 241-AY and 241-AZ tank farm MICON automation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkelman, W.D.

    This document describes the configuration process, the choices and conventions used during the configuration activities, and the issues involved in making changes to the configuration. It includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 2 incorporates minor changes to ensure the document setpoints accurately reflect limits (including exhaust stack flow of 800 scfm) established in OSD-T-151-00019. The MICON DCS software controls and monitors the instrumentation and equipment associated with plant systems and processes.

  11. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Currently, there are many review articles introducing thermal error research on CNC machine tools, but these mainly focus on thermal issues in small and medium-sized CNC machine tools and seldom introduce thermal error monitoring technologies. This paper gives an overview of the research on the thermal error of CNC machine tools and emphasizes the study of thermal error of heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology called the "fiber Bragg grating (FBG) distributed sensing technology" for heavy-duty CNC machine tools is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in this kind of review article, guiding the development of this industry field and opening up new areas of research on heavy-duty CNC machine tool thermal error.

  12. Experiences with the Twitter Health Surveillance (THS) System

    PubMed Central

    Rodríguez-Martínez, Manuel

    2018-01-01

    Social media has become an important platform to gauge public opinion on topics related to our daily lives. In practice, processing these posts requires big data analytics tools since the volume of data and the speed of production overwhelm single-server solutions. Building an application to capture and analyze posts from social media can be a challenge simply because it requires combining a set of complex software tools that are often tricky to configure, tune, and maintain. In many instances, the application ends up being an assorted collection of Java/Scala programs or Python scripts that developers cobble together to generate the data products they need. In this paper, we present the Twitter Health Surveillance (THS) application framework. THS is designed as a platform to allow end-users to monitor a stream of tweets, and process the stream with a combination of built-in functionality and their own user-defined functions. We discuss the architecture of THS, and describe its implementation atop the Apache Hadoop Ecosystem. We also present several lessons learned while developing our current prototype. PMID:29607412

  13. Experiences with the Twitter Health Surveillance (THS) System.

    PubMed

    Rodríguez-Martínez, Manuel

    2017-06-01

    Social media has become an important platform to gauge public opinion on topics related to our daily lives. In practice, processing these posts requires big data analytics tools since the volume of data and the speed of production overwhelm single-server solutions. Building an application to capture and analyze posts from social media can be a challenge simply because it requires combining a set of complex software tools that are often tricky to configure, tune, and maintain. In many instances, the application ends up being an assorted collection of Java/Scala programs or Python scripts that developers cobble together to generate the data products they need. In this paper, we present the Twitter Health Surveillance (THS) application framework. THS is designed as a platform to allow end-users to monitor a stream of tweets, and process the stream with a combination of built-in functionality and their own user-defined functions. We discuss the architecture of THS, and describe its implementation atop the Apache Hadoop Ecosystem. We also present several lessons learned while developing our current prototype.

  14. The Social Construction of Ability in Movement Assessment Tools

    ERIC Educational Resources Information Center

    Tidén, Anna; Redelius, Karin; Lundvall, Suzanne

    2017-01-01

    This paper focuses on how "ability" is conceptualised, configured and produced in movement assessment tools. The aim of the study was to critically analyse assessment tools used for healthy and typically developed children. The sample consists of 10 tools from 6 different countries. In the study, we pay special attention to content and…

  15. Assessment Tools' Indicators for Sustainability in Universities: An Analytical Overview

    ERIC Educational Resources Information Center

    Alghamdi, Naif; den Heijer, Alexandra; de Jonge, Hans

    2017-01-01

    Purpose: The purpose of this paper is to analyse 12 assessment tools of sustainability in universities and develop the structure and the contents of these tools to be more intelligible. The configuration of the tools reviewed highlight indicators that clearly communicate only the essential information. This paper explores how the theoretical…

  16. Securing your Site in Development and Beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akopov, Mikhail S.

    Why wait until production deployment, or even staging and testing deployment, to identify security vulnerabilities? Using tools like Burp Suite, you can find security vulnerabilities before they creep up on you. Prevent cross-site scripting attacks, and establish a firmer trust between your website and your client. Verify that Apache/Nginx have the correct SSL Ciphers set. We explore using these tools and more to validate proper Apache/Nginx configurations, and to be compliant with modern configuration standards as part of the development cycle. Your clients can use tools like https://securityheaders.io and https://ssllabs.com to get a graded report on your level of compliance with OWASP Secure Headers Project and SSLLabs recommendations. Likewise, you should always use the same sites to validate your configurations. Burp Suite will find common misconfigurations and will also perform more thorough security testing of your applications. In this session you will see examples of vulnerabilities that were detected early on, as well as how to integrate these practices into your daily workflow.

  17. Automated Kinematics Equations Generation and Constrained Motion Planning Resolution for Modular and Reconfigurable Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, Francois G.; Love, Lonnie L.; Jung, David L.

    2004-03-29

    Contrary to the repetitive tasks performed by industrial robots, the tasks in most DOE missions such as environmental restoration or Decontamination and Decommissioning (D&D) can be characterized as "batches-of-one", in which robots must be capable of adapting to changes in constraints, tools, environment, criteria and configuration. No commercially available robot control code is suitable for use with such widely varying conditions. In this talk we present our development of a "generic code" to allow real time (at loop rate) robot behavior adaptation to changes in task objectives, tools, number and type of constraints, modes of control or kinematics configuration. We present the analytical framework underlying our approach and detail the design of its two major modules for the automatic generation of the kinematics equations when the robot configuration or tools change and for the motion planning under time-varying constraints. Sample problems illustrating the capabilities of the developed system are presented.
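
    As an illustration of the first module described above, the forward kinematics of a modular arm can be regenerated symbolically from a table of per-joint parameters whenever the configuration changes. The sketch below assumes standard Denavit-Hartenberg parameters and the sympy library; it is a minimal stand-in for the idea, not the ORNL generic code.

      # Minimal sketch (not the ORNL "generic code"): compose forward kinematics
      # symbolically from a table of Denavit-Hartenberg parameters so the
      # kinematics equations can be regenerated whenever the module list changes.
      import sympy as sp

      def dh_transform(theta, d, a, alpha):
          """Homogeneous transform for one joint, standard DH convention."""
          ct, st = sp.cos(theta), sp.sin(theta)
          ca, sa = sp.cos(alpha), sp.sin(alpha)
          return sp.Matrix([[ct, -st * ca,  st * sa, a * ct],
                            [st,  ct * ca, -ct * sa, a * st],
                            [0,        sa,       ca,      d],
                            [0,         0,        0,      1]])

      def forward_kinematics(dh_table):
          """Multiply the per-joint transforms for the current configuration."""
          T = sp.eye(4)
          for row in dh_table:
              T = T * dh_transform(*row)
          return sp.simplify(T)

      # Hypothetical 2-link planar arm: joint variables q1, q2, link lengths l1, l2.
      q1, q2, l1, l2 = sp.symbols("q1 q2 l1 l2")
      T = forward_kinematics([(q1, 0, l1, 0), (q2, 0, l2, 0)])
      print(T[0, 3], T[1, 3])  # end-effector x, y expressions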

  18. Module Configuration

    DOEpatents

    Oweis, Salah; D'Ussel, Louis; Chagnon, Guy; Zuhowski, Michael; Sack, Tim; Laucournet, Gaullume; Jackson, Edward J.

    2002-06-04

    A stand alone battery module including: (a) a mechanical configuration; (b) a thermal management configuration; (c) an electrical connection configuration; and (d) an electronics configuration. Such a module is fully interchangeable in a battery pack assembly, mechanically, from the thermal management point of view, and electrically. With the same hardware, the module can accommodate different cell sizes and, therefore, can easily have different capacities. The module structure is designed to accommodate the electronics monitoring, protection, and printed wiring assembly boards (PWAs), as well as to allow airflow through the module. A plurality of modules may easily be connected together to form a battery pack. The parts of the module are designed to facilitate their manufacture and assembly.

  19. Optical Sensing using Fiber Bragg Gratings for Monitoring Structural Damage in Composite Over-Wrapped Vessels

    NASA Technical Reports Server (NTRS)

    Grant, Joseph

    2005-01-01

    Composite Overwrapped Pressure Vessels (COPVs) are widely used in the aerospace community. They are made of thin-walled bottles that are overwrapped with high strength fibers embedded in a matrix material. There is a strong drive to reduce the weight of spaceborne vehicles, which pushes designers to adopt COPVs that are overwrapped with graphite fibers embedded in an epoxy matrix. Unfortunately, this same fiber-matrix configuration is more susceptible to impact damage than others and, to make matters worse, there is a regime where impacts that damage the overwrap leave no visible scar on the COPV surface. In this paper FBG sensors are presented as a means of monitoring and detecting these types of damage. The FBG sensors are surface mounted to the COPVs and optically interrogated to explore the structural properties of these composite pressure vessels. These gratings, optically inscribed into the core of a single mode fiber, are used as a tool to monitor the stress-strain relation in the composite matrix. The response of these fiber-optic sensors is investigated by pressurizing the cylinder up to its burst pressure of around 4500 psi. A Fiber Optic Demodulation System built by Blue Road Research is used for interrogation of the Bragg gratings.
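
    For context on how such gratings are read out: a surface-mounted FBG converts strain into a shift of its Bragg wavelength, commonly modeled as Δλ/λB = (1 - pe)·ε when temperature effects are neglected. The snippet below is a minimal, hedged sketch of that standard conversion; the numerical values and the photo-elastic coefficient are illustrative and are not taken from the paper.

      # Hedged sketch: convert a measured Bragg wavelength shift to strain using
      # the standard relation delta_lambda / lambda_B = (1 - p_e) * strain,
      # neglecting temperature effects. Values below are illustrative only.
      def bragg_shift_to_strain(lambda_b_nm, delta_lambda_nm, p_e=0.22):
          """Return axial strain (dimensionless) for a surface-mounted FBG."""
          return delta_lambda_nm / (lambda_b_nm * (1.0 - p_e))

      strain = bragg_shift_to_strain(lambda_b_nm=1550.0, delta_lambda_nm=1.2)
      print(f"strain = {strain:.6f}  ({strain * 1e6:.0f} microstrain)")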

  20. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  1. Fluorescence diffuse tomography for detection of RFP-expressed tumors in small animals

    NASA Astrophysics Data System (ADS)

    Turchin, Ilya V.; Savitsky, Alexander P.; Kamensky, Vladislav A.; Plehanov, Vladimir I.; Meerovich, Irina G.; Arslanbaeva, Lyaisan R.; Jerdeva, Viktoria V.; Orlova, Anna G.; Kleshnin, Mikhail S.; Shirmanova, Marina V.; Fiks, Ilya I.

    2007-02-01

    Conventional optical imaging is limited in detectable tumor size due to high tissue scattering. Labeling of tumors with fluorescent markers improves the sensitivity of tumor detection, thus increasing the value of optical imaging dramatically. Creation of tumor cell lines transfected with fluorescent proteins gives the possibility not only to detect tumors, but also to conduct intravital monitoring studies. Cell lines of human melanomas Mel-P, Mel-Kor and human embryonic kidney HEK-293 Phoenix were transfected with DsRed-Express and TurboRFP genes. Emission of RFP in the long-wave optical range permits detection of deeply located tumors, which is essential for whole-body imaging. Only special tools for turbid media imaging, such as fluorescent diffusion tomography (FDT), enable noninvasive investigation of the internal structure of biological tissue. An FDT setup for monitoring of tumor growth in small animals has been created. An animal is scanned in the transilluminative configuration by low-frequency modulated light (1 kHz) from an Nd:YAG laser with second harmonic generation at the 532 nm wavelength. In vivo experiments were conducted immediately after the subcutaneous injection of fluorescing cells into small animals. It was shown that the FDT method allows detection of the presence of fluorescent cells in small animals and can be used for monitoring of tumor growth and anticancer drug response.

  2. Fluorescence diffuse tomography for detection of RFP-expressed tumors in small animals

    NASA Astrophysics Data System (ADS)

    Turchin, Ilya V.; Savitsky, Alexander P.; Kamensky, Vladislav A.; Plehanov, Vladimir I.; Orlova, Anna G.; Kleshnin, Mikhail S.; Shirmanova, Marina V.; Fix, Ilya I.; Popov, Vladimir O.

    2007-07-01

    Capabilities of tumor detection by different optical methods can be significantly improved by labeling of tumors with fluorescent markers. Creation of tumor cell lines transfected with fluorescent proteins provides the possibility not only to detect tumors, but also to conduct intravital monitoring studies. Cell lines of human melanomas Mel-P, Mel-Kor and human embryonic kidney HEK-293 Phoenix were transfected with DsRed-Express and Turbo-RFP genes. Emission of RFP in the long-wave optical range permits detection of deeply located tumors, which is essential for whole-body imaging. Only special tools for turbid media imaging, such as fluorescent diffusion tomography (FDT), enable noninvasive investigation of the internal structure of biological tissue. An FDT setup for monitoring of tumor growth in small animals has been created. An animal is scanned in the transilluminative configuration by low-frequency modulated light (1 kHz) from an Nd:YAG laser with second harmonic generation at the 532 nm wavelength. An optimizing algorithm for scanning of an experimental animal is suggested. In vivo experiments were conducted immediately after the subcutaneous injection of fluorescing cells into small animals. It was shown that the FDT method allows detection of the presence of fluorescent cells in small animals and can be used for monitoring of tumor growth and anticancer drug response.

  3. A PDA-based Network for Telemonitoring Asthma Triggering Gases in the El Paso School Districts of the US - Mexico Border Region.

    PubMed

    Shenoy, Namdev; Nazeran, Homer

    2005-01-01

    In this paper we describe the application of a personal digital assistant (PDA) or pocket PC as an effective communication device to telemonitor levels of asthma triggering gases collected from a remote location under test to a workstation which has a personal computer (PC) running on Windows XP® as the operating system. The Bluetooth® features of the PDA are explored to transmit data collected by a Direct™ Sense Tox toxic gas monitor equipped with five toxic gas probes and one temperature sensor in real time, thereby making this telemonitoring system an innovative instrument in monitoring levels of asthma triggering gases in the El Paso-border metropolitan region, a region in which asthma is highly prevalent especially in children. At the workstation or fixed location these readings are displayed using a custom made, user friendly graphical user interface (GUI) developed using software tools like action scripting with Macromedia® Flash™. The growing advancement in technology and ever diminishing sizes of handheld devices encouraged us to opt for this configuration. Moreover, the PDA and toxic gas monitor were also chosen for their light weight, portability, flexibility, low cost and data collection and transmission capabilities.

  4. Advanced Nuclear Technology. Using Technology for Small Modular Reactor Staff Optimization, Improved Effectiveness, and Cost Containment, 3002007071

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loflin, Leonard

    Through this grant, the U.S. Department of Energy (DOE) will review several functional areas within a nuclear power plant, including fire protection, operations and operations support, refueling, training, procurement, maintenance, site engineering, and others. Several functional areas need to be examined since there appears to be no single staffing area or approach that alone has the potential for significant staff optimization at new nuclear power plants. Several of the functional areas will require a review of technology options such as automation, remote monitoring, fleet wide monitoring, new and specialized instrumentation, human factors engineering, risk informed analysis and PRAs, component and system condition monitoring and reporting, just in time training, electronic and automated procedures, electronic tools for configuration management and license and design basis information, etc., that may be applied to support optimization. Additionally, the project will require a review of key regulatory issues that affect staffing and could be optimized with additional technology input. Opportunities to further optimize staffing levels and staffing functions by selection of design attributes of physical systems and structures also need to be identified. A goal of this project is to develop a prioritized assessment of the functional areas, and R&D actions needed for those functional areas, to provide the best optimization.

  5. Process tool monitoring and matching using interferometry technique

    NASA Astrophysics Data System (ADS)

    Anberg, Doug; Owen, David M.; Mileham, Jeffrey; Lee, Byoung-Ho; Bouche, Eric

    2016-03-01

    The semiconductor industry makes dramatic device technology changes over short time periods. As the semiconductor industry advances toward the 10 nm device node, more precise management and control of processing tools has become a significant manufacturing challenge. Some processes require multiple tool sets and some tools have multiple chambers for mass production. Tool and chamber matching has become a critical consideration for meeting today's manufacturing requirements. Additionally, process tool and chamber conditions have to be monitored to ensure uniform process performance across the tool and chamber fleet. There are many parameters for managing and monitoring tools and chambers. Particle defect monitoring is a well-known and established example where defect inspection tools can directly detect particles on the wafer surface. However, leading edge processes are driving the need to also monitor invisible defects, e.g., stress, contamination, etc., because some device failures cannot be directly correlated with traditional visualized defect maps or other known sources. Some failure maps show the same signatures as stress or contamination maps, which implies a correlation with device performance or yield. In this paper we present process tool monitoring and matching using an interferometry technique. There are many types of interferometry techniques used for various process monitoring applications. We use a Coherent Gradient Sensing (CGS) interferometer which is self-referencing and enables high throughput measurements. Using this technique, we can quickly measure the topography of an entire wafer surface and obtain stress and displacement data from the topography measurement. For improved tool and chamber matching and reduced device failure, wafer stress measurements can be implemented as a regular tool or chamber monitoring test for either unpatterned or patterned wafers, as a good criterion for improved process stability.
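
    The abstract does not spell out how stress is derived from the measured topography; one common route, assumed here purely for illustration, is to fit the wafer curvature along a diameter and apply Stoney's formula. The sketch below uses synthetic topography and illustrative substrate and film properties, not values from the paper.

      # Hedged sketch: Stoney's formula is a standard route from measured wafer
      # curvature to film stress; the paper does not describe its own computation.
      import numpy as np

      def curvature_from_topography(x_m, z_m):
          """Fit z = 0.5*kappa*x^2 + b*x + c along one diameter; return kappa (1/m)."""
          coeffs = np.polyfit(x_m, z_m, 2)
          return 2.0 * coeffs[0]

      def stoney_stress(kappa, h_s=775e-6, h_f=1e-6, E_s=130e9, nu_s=0.28):
          """Film stress in Pa: sigma = E_s*h_s^2*kappa / (6*(1-nu_s)*h_f)."""
          return E_s * h_s**2 * kappa / (6.0 * (1.0 - nu_s) * h_f)

      # Illustrative topography scan along a 300 mm wafer diameter (synthetic data).
      x = np.linspace(-0.15, 0.15, 201)
      z = 0.5 * (1.0 / 50.0) * x**2          # bow corresponding to a 50 m radius
      sigma = stoney_stress(curvature_from_topography(x, z))
      print(f"estimated film stress: {sigma / 1e6:.1f} MPa")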

  6. A new data logger for integrated geophysical monitoring

    NASA Astrophysics Data System (ADS)

    Orazi, Massimo; Peluso, Rosario; Caputo, Antonio; Giudicepietro, Flora; Martini, Marcello

    2015-04-01

    The GILDA digital recorder is a data logger developed at Osservatorio Vesuviano (INGV). It provides excellent data quality with low power consumption and low production cost. It is widely used in the multi-parametric monitoring networks of the Neapolitan volcanoes and Stromboli volcano. We have improved the characteristics of the GILDA recorder to realize a robust, user-oriented acquisition system for integrated geophysical monitoring. We have designed and implemented new capabilities concerning the use of the low rate channels to acquire data on environmental parameters of the station. We also improved the stand-alone version of the data logger. This version can be particularly useful for scientific experiments and for rapidly upgrading permanent monitoring networks. Furthermore, the local storage can be used as back-up for monitoring systems in continuous transmission, in case of failure of the transmission system. Some firmware changes have been made in order to improve the performance of the instrument. In particular, the low rate acquisition channels were conditioned to acquire internal parameters of the recorder such as temperature and voltage. A prototype of the new version of the logger is currently installed at Campi Flegrei for an experimental application. Our experiment is aimed at testing the new version of the GILDA data logger in a multi-board configuration for multiparametric acquisitions. A second objective of the experiment is the comparison of the recorded data with geochemical data acquired by a multiparametric geochemical station, to investigate possible correlations between seismic and geochemical parameters. The target site of the experiment is the "Bocca Grande" fumarole in the Solfatara volcano. By exploiting the modularity of GILDA, an acquisition system based on three data loggers, for a total of 12 available channels, has been realized for the experiment. One of the GILDA recorders is the Master and the other two are Slaves. The Master is responsible for the initial configuration of the GPS receiver for timing data. The two data loggers configured in slave mode await the end of the initial configuration and then receive the GPS timing data and PPS from the Master. This allows the use of a single GPS receiver and optimizes power consumption. The whole system is configured to continuously transmit data via WiFi and to store data locally.

  7. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    NASA Astrophysics Data System (ADS)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for design of biomicrofluidic devices and integrated systems. The developed tool would provide high fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  8. What would dense atmospheric observation networks bring to the quantification of city CO2 emissions?

    NASA Astrophysics Data System (ADS)

    Wu, Lin; Broquet, Grégoire; Ciais, Philippe; Bellassen, Valentin; Vogel, Felix; Chevallier, Frédéric; Xueref-Remy, Irène; Wang, Yilong

    2016-06-01

    Cities currently covering only a very small portion (< 3 %) of the world's land surface directly release to the atmosphere about 44 % of global energy-related CO2, but they are associated with 71-76 % of CO2 emissions from global final energy use. Although many cities have set voluntary climate plans, their CO2 emissions are not evaluated by the monitoring, reporting, and verification (MRV) procedures that play a key role for market- or policy-based mitigation actions. Here we analyze the potential of a monitoring tool that could support the development of such procedures at the city scale. It is based on an atmospheric inversion method that exploits inventory data and continuous atmospheric CO2 concentration measurements from a network of stations within and around cities to estimate city CO2 emissions. This monitoring tool is configured for the quantification of the total and sectoral CO2 emissions in the Paris metropolitan area (~12 million inhabitants and 11.4 TgC emitted in 2010) during the month of January 2011. Its performances are evaluated in terms of uncertainty reduction based on observing system simulation experiments (OSSEs). They are analyzed as a function of the number of sampling sites (measuring at 25 m a.g.l.) and as a function of the network design. The instruments presently used to measure CO2 concentrations at research stations are expensive (typically ~EUR 50 k per sensor), which has limited the few current pilot city networks to around 10 sites. Larger theoretical networks are studied here to assess the potential benefit of hypothetical operational lower-cost sensors. The setup of our inversion system is based on a number of diagnostics and assumptions from previous city-scale inversion experiences with real data. We find that, given our assumptions underlying the configuration of the OSSEs, with 10 stations only the uncertainty for the total city CO2 emission during 1 month is significantly reduced by the inversion by ~42 %. It can be further reduced by extending the network, e.g., from 10 to 70 stations, which is promising for MRV applications in the Paris metropolitan area. With 70 stations, the uncertainties in the inverted emissions are reduced significantly over those obtained using 10 stations: by 32 % for commercial and residential buildings, by 33 % for road transport, by 18 % for the production of energy by power plants, and by 31 % for total emissions. These results indicate that such a high number of stations would likely be required for the monitoring of sectoral emissions in Paris using this observation-model framework. They demonstrate the high potential of atmospheric inversions to contribute to the monitoring and/or the verification of city CO2 emissions (baseline) and CO2 emission reductions (commitments) and the advantage that could be brought by the current developments of lower-cost medium precision (LCMP) sensors.
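
    The uncertainty-reduction figures quoted above come from OSSEs built on a linear Bayesian inversion. As a toy illustration of that metric (not the study's actual system), the sketch below computes UR = 1 - sigma_posterior / sigma_prior for a single scalar emission observed by a varying number of stations, with made-up sensitivities and error statistics.

      # Toy sketch of the uncertainty-reduction metric used in such OSSEs:
      # posterior covariance A = (B^-1 + H^T R^-1 H)^-1 for a linear inversion,
      # UR = 1 - sqrt(A)/sigma_prior. All numbers are illustrative.
      import numpy as np

      def uncertainty_reduction(H, sigma_prior, sigma_obs):
          """H maps the (scalar) city emission to n_station concentration signals."""
          B = np.array([[sigma_prior**2]])              # prior error covariance
          R = np.eye(H.shape[0]) * sigma_obs**2         # observation error covariance
          A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
          return 1.0 - np.sqrt(A[0, 0]) / sigma_prior

      rng = np.random.default_rng(0)
      for n_stations in (10, 70):
          # Hypothetical sensitivities of each station's monthly-mean CO2 signal
          # to the total city emission (ppm per unit emission).
          H = rng.uniform(0.05, 0.3, size=(n_stations, 1))
          print(n_stations, "stations ->", round(uncertainty_reduction(H, 1.0, 1.0), 2))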

  9. FFI: A software tool for ecological monitoring

    Treesearch

    Duncan C. Lutes; Nathan C. Benson; MaryBeth Keifer; John F. Caratti; S. Austin Streetman

    2009-01-01

    A new monitoring tool called FFI (FEAT/FIREMON Integrated) has been developed to assist managers with collection, storage and analysis of ecological information. The tool was developed through the complementary integration of two fire effects monitoring systems commonly used in the United States: FIREMON and the Fire Ecology Assessment Tool. FFI provides software...

  10. Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks.

    PubMed

    Taniguchi, Yoshiaki; Tsutsumi, Hiroaki; Iguchi, Nobukazu; Watanabe, Kenzi

    2016-01-01

    Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, which is a core technology to realize SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information depending on the administrator's configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance.

  11. Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks

    PubMed Central

    Taniguchi, Yoshiaki; Tsutsumi, Hiroaki; Iguchi, Nobukazu; Watanabe, Kenzi

    2016-01-01

    Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, which is a core technology to realize SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information depending on the administrator's configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance. PMID:27006977
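
    To make the proxy idea concrete, the sketch below is a minimal stand-in (not the authors' implementation): an asyncio TCP relay placed between switches and the controller that forwards traffic unchanged while counting OpenFlow messages by type, using the common 8-byte OpenFlow header. The listening port, controller address, and counting logic are assumptions for illustration.

      # Minimal proxy sketch: relay the controller<->switch TCP session and count
      # OpenFlow messages by type using the 8-byte header
      # version(1), type(1), length(2), xid(4), big-endian.
      import asyncio, struct
      from collections import Counter

      OFP_HEADER = struct.Struct("!BBHI")
      stats = Counter()

      async def relay(reader, writer, direction):
          buf = b""
          while True:
              chunk = await reader.read(4096)
              if not chunk:
                  writer.close()
                  return
              buf += chunk
              # Parse complete OpenFlow messages out of the byte stream for counting.
              while len(buf) >= OFP_HEADER.size:
                  _ver, mtype, length, _xid = OFP_HEADER.unpack_from(buf)
                  if length < OFP_HEADER.size or len(buf) < length:
                      break
                  stats[(direction, mtype)] += 1
                  buf = buf[length:]
              writer.write(chunk)            # forward the bytes unchanged
              await writer.drain()

      async def handle_switch(sw_reader, sw_writer):
          # Assumed controller address/port; switches are pointed at the proxy.
          ctl_reader, ctl_writer = await asyncio.open_connection("127.0.0.1", 6653)
          await asyncio.gather(relay(sw_reader, ctl_writer, "switch->controller"),
                               relay(ctl_reader, sw_writer, "controller->switch"))

      async def main():
          server = await asyncio.start_server(handle_switch, "0.0.0.0", 16653)
          async with server:
              await server.serve_forever()

      if __name__ == "__main__":
          asyncio.run(main())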

  12. A Wireless Sensor Network-Based Approach with Decision Support for Monitoring Lake Water Quality.

    PubMed

    Huang, Xiaoci; Yi, Jianjun; Chen, Shaoli; Zhu, Xiaomin

    2015-11-19

    Online monitoring and water quality analysis of lakes are urgently needed. A feasible and effective approach is to use a Wireless Sensor Network (WSN). Lake water environments, like other real world environments, present many changing and unpredictable situations. To ensure flexibility in such an environment, the WSN node has to be prepared to deal with varying situations. This paper presents a WSN self-configuration approach for lake water quality monitoring. The approach is based on the integration of a semantic framework, where a reasoner can make decisions on the configuration of WSN services. We present a WSN ontology and the relevant water quality monitoring context information, which considers its suitability in a pervasive computing environment. We also propose a rule-based reasoning engine that is used to conduct decision support through reasoning techniques and context-awareness. To evaluate the approach, we conduct usability experiments and performance benchmarks.

  13. An Oil/Water disperser device for use in an oil content Monitor/Control system

    NASA Astrophysics Data System (ADS)

    Kempel, F. D.

    1985-07-01

    This patent application discloses an oil content monitor/control unit system, including an oil/water disperser device, which is configured to automatically monitor and control processed effluent from an associated oil/water separator so that if the processed effluent exceeds predetermine in-port or at-sea oil concentration lmits, it is either recirculated to an associated oil/water separator via a ship's bilge for additional processing, or diverted to a holding tank for storage. On the other hand, if the oil concentration of the processed effluent is less than predetermine in-port or at-sea limits, it is discharged overboard. The oil/water disperser device is configured to break up any oil present in the processed effluent into uniform droplets for more accurate sensing of the oil present in the processed effluent into uniform droplets for more accurate sensing of the oil-in-water concentration level thereof. The oil/water disperser device has a flow-actuated variable orifice configured into a spring-loaded polyethylene plunger which provides the uniform distribution of oil droplets.

  14. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811

  15. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.
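
    As a minimal illustration of the run-time sampling side of such a monitor (not the authors' RTM), the sketch below polls per-core CPU utilization and a target process's resident memory with the psutil library and flags cores that stay saturated, the kind of signal a reconfiguration step could act on. Thresholds and intervals are illustrative.

      # Minimal run-time sampling sketch (not the authors' RTM): poll per-core CPU
      # utilization and a process's memory at a fixed interval and report cores
      # that remain saturated across all samples.
      import os
      import psutil

      def monitor(pid, interval_s=1.0, samples=10, busy_threshold=90.0):
          proc = psutil.Process(pid)
          saturated = [0] * psutil.cpu_count()
          for _ in range(samples):
              per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
              rss_mb = proc.memory_info().rss / 2**20
              for core, load in enumerate(per_core):
                  if load > busy_threshold:
                      saturated[core] += 1
              print(f"cores={per_core} rss={rss_mb:.1f} MiB")
          return [c for c, n in enumerate(saturated) if n == samples]

      if __name__ == "__main__":
          hot = monitor(os.getpid(), samples=3)
          print("persistently saturated cores:", hot)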

  16. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    NASA Astrophysics Data System (ADS)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling of the tool wear process in micro-milling and for estimating the tool wear state from the cutting force features. For a noise-robust approach, the HMM outputs are passed through a median filter to suppress spurious tool state transitions caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.
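
    One way to realize the multi-category idea, sketched below with synthetic data rather than the paper's models, is to train one Gaussian HMM per wear state with hmmlearn, classify each window of cutting-force features by maximum log-likelihood, and median-filter the sequence of decisions for noise robustness.

      # Hedged sketch of the multi-category idea: one Gaussian HMM per wear state,
      # classify a feature window by maximum log-likelihood, median-filter the
      # decisions. Synthetic data; not the paper's models or features.
      import numpy as np
      from hmmlearn import hmm
      from scipy.signal import medfilt

      STATES = ("initial", "progressive", "severe")

      def train_models(training_windows, n_components=3, seed=0):
          """training_windows: dict state -> list of (T, n_features) arrays."""
          models = {}
          for state, windows in training_windows.items():
              X = np.vstack(windows)
              lengths = [len(w) for w in windows]
              m = hmm.GaussianHMM(n_components=n_components, covariance_type="diag",
                                  n_iter=50, random_state=seed)
              m.fit(X, lengths)
              models[state] = m
          return models

      def classify(models, window):
          scores = {s: m.score(window) for s, m in models.items()}
          return max(scores, key=scores.get)

      # Synthetic cutting-force feature windows: mean level grows with wear.
      rng = np.random.default_rng(1)
      train = {s: [rng.normal(loc=i, size=(200, 2)) for _ in range(5)]
               for i, s in enumerate(STATES)}
      models = train_models(train)

      test = [rng.normal(loc=2, size=(200, 2)) for _ in range(7)]   # "severe" windows
      raw = [STATES.index(classify(models, w)) for w in test]
      smoothed = medfilt(raw, kernel_size=3)                        # noise-robust decisions
      print(raw, list(smoothed))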

  17. Configuration Management Plan for the Tank Farm Contractor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WEIR, W.R.

    The Configuration Management Plan for the Tank Farm Contractor describes the configuration management the contractor uses to manage and integrate its technical baseline with the programmatic and functional operations to perform work. The Configuration Management Plan for the Tank Farm Contractor supports the management of the project baseline by providing the mechanisms to identify, document, and control the technical characteristics of the products, processes, and structures, systems, and components (SSC). This plan is one of the tools used to identify and provide controls for the technical baseline of the Tank Farm Contractor (TFC). The configuration management plan is listed in the management process documents for TFC as depicted in Attachment 1, TFC Document Structure. The configuration management plan is an integrated approach for control of the technical, schedule, cost, and administrative processes necessary to manage the mission of the TFC. Configuration management encompasses the five functional elements of: (1) configuration management administration, (2) configuration identification, (3) configuration status accounting, (4) change control, and (5) configuration management assessments.

  18. Ground System Architectures Workshop GMSEC SERVICES SUITE (GSS): an Agile Development Story

    NASA Technical Reports Server (NTRS)

    Ly, Vuong

    2017-01-01

    The GMSEC (Goddard Mission Services Evolution Center) Services Suite (GSS) is a collection of tools and software services along with a robust customizable web-based portal that enables the user to capture, monitor, report, and analyze system-wide GMSEC data. Given our plug-and-play architecture and the needs for rapid system development, we opted to follow the Scrum Agile Methodology for software development. Being one of the first few projects to implement the Agile methodology at NASA GSFC, in this presentation we will present our approaches, tools, successes, and challenges in implementing this methodology. The GMSEC architecture provides a scalable, extensible ground and flight system for existing and future missions. GMSEC comes with a robust Application Programming Interface (GMSEC API) and a core set of Java-based GMSEC components that facilitate the development of a GMSEC-based ground system. Over the past few years, we have seen an uptick in the number of customers who are moving from a native desktop application environment to a web-based environment, particularly for data monitoring and analysis. We also see a need to provide separation of the business logic from the GUI display for our Java-based components and also to consolidate all the GUI displays into one interface. This combination of separation and consolidation brings immediate value to a GMSEC-based ground system through increased ease of data access via a uniform interface, built-in security measures, centralized configuration management, and ease of feature extensibility.

  19. Using NetMeeting for remote configuration of the Otto Bock C-Leg: technical considerations.

    PubMed

    Lemaire, E D; Fawcett, J A

    2002-08-01

    Telehealth has the potential to be a valuable tool for technical and clinical support of computer controlled prosthetic devices. This pilot study examined the use of Internet-based, desktop video conferencing for remote configuration of the Otto Bock C-Leg. Laboratory tests involved connecting two computers running Microsoft NetMeeting over a local area network (IP protocol). Over 56 kbit/s, DSL/cable, and 10 Mbit/s LAN connection speeds, a prosthetist remotely configured a user's C-Leg by using Application Sharing, Live Video, and Live Audio. A similar test between sites in Ottawa and Toronto, Canada was limited by the notebook computer's 28 kbit/s modem. At the 28 kbit/s Internet-connection speed, NetMeeting's application sharing feature was not able to update the remote Sliders window fast enough to display peak toe loads and peak knee angles. These results support the use of NetMeeting as an accessible and cost-effective tool for remote C-Leg configuration, provided that sufficient Internet data transfer speed is available.

  20. Supporting performance and configuration management of GTE cellular networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Ming; Lafond, C.; Jakobson, G.

    GTE Laboratories, in cooperation with GTE Mobilnet, has developed and deployed PERFEX (PERFormance Expert), an intelligent system for performance and configuration management of cellular networks. PERFEX assists cellular network performance and radio engineers in the analysis of large volumes of cellular network performance and configuration data. It helps them locate and determine the probable causes of performance problems, and provides intelligent suggestions about how to correct them. The system combines an expert cellular network performance tuning capability with a map-based graphical user interface, data visualization programs, and a set of special cellular engineering tools. PERFEX is in daily use at more than 25 GTE Mobile Switching Centers. Since the first deployment of the system in late 1993, PERFEX has become a major GTE cellular network performance optimization tool.

  1. MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling.

    PubMed

    Piro, Vitor C; Matschkowski, Marcel; Renard, Bernhard Y

    2017-08-14

    Many metagenome analysis tools are presently available to classify sequences and profile environmental samples. In particular, taxonomic profiling and binning methods are commonly used for such tasks. Tools in these two categories make use of several techniques, e.g., read mapping, k-mer alignment, and composition analysis. Variations in the construction of the corresponding reference sequence databases are also common. In addition, different tools provide good results on different datasets and configurations. All this variation creates a complicated scenario for researchers deciding which methods to use. Installation, configuration, and execution can also be difficult, especially when dealing with multiple datasets and tools. We propose MetaMeta: a pipeline to execute and integrate results from metagenome analysis tools. MetaMeta provides an easy workflow to run multiple tools with multiple samples, producing a single enhanced output profile for each sample. MetaMeta includes database generation, pre-processing, execution, and integration steps, allowing easy execution and parallelization. The integration relies on the co-occurrence of organisms reported by different methods as the main feature to improve community profiling while accounting for differences in their databases. In a controlled case with simulated and real data, we show that the integrated profiles of MetaMeta outperform the best single profile. Using the same input data, it provides more sensitive and reliable results, with the presence of each organism being supported by several methods. MetaMeta uses Snakemake and has six pre-configured tools, all available on the BioConda channel for easy installation (conda install -c bioconda metameta). The MetaMeta pipeline is open-source and can be downloaded at: https://gitlab.com/rki_bioinformatics.
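
    The integration step described above can be illustrated with a small, hedged sketch. It is not the MetaMeta implementation; it is only a toy Python merge that scores taxa by how many tools report them and keeps those above a configurable support threshold (the function and parameter names, such as min_support, are hypothetical).

        # Hedged sketch: merge taxonomic profiles from several tools by co-occurrence.
        # Each profile maps taxon -> relative abundance; all names are illustrative.
        from collections import defaultdict

        def integrate_profiles(profiles, min_support=2):
            """Keep taxa reported by at least min_support tools; average abundances."""
            counts = defaultdict(int)
            abundance_sum = defaultdict(float)
            for profile in profiles:
                for taxon, abundance in profile.items():
                    counts[taxon] += 1
                    abundance_sum[taxon] += abundance
            merged = {t: abundance_sum[t] / counts[t]
                      for t in counts if counts[t] >= min_support}
            total = sum(merged.values()) or 1.0
            return {t: a / total for t, a in merged.items()}  # renormalize

        tool_a = {"E. coli": 0.6, "B. subtilis": 0.4}
        tool_b = {"E. coli": 0.5, "S. aureus": 0.5}
        tool_c = {"E. coli": 0.7, "B. subtilis": 0.3}
        print(integrate_profiles([tool_a, tool_b, tool_c]))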

  2. Measured energy savings and performance of power-managed personal computers and monitors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, B.; Piette, M.A.; Kinney, K.

    1996-08-01

    Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a 'sleep' or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the 'As-operated', 'Standardized', and 'Maximum' savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, and about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and have greater savings. The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
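
    The mode-based estimation method lends itself to a short worked example. The sketch below uses illustrative hours and power draws (not the study's measurements) to derive annual energy use and the savings relative to a baseline with power management disabled.

        # Hedged sketch of the mode-based estimate: hours per mode x power per mode.
        # All numbers are illustrative assumptions, not measurements from the paper.
        modes = {            # (hours per year, watts) for a hypothetical monitor
            "full": (2000, 70),
            "low":  (1500, 25),   # power-managed sleep state
            "off":  (5260, 0),
        }
        energy_kwh = sum(h * w for h, w in modes.values()) / 1000.0

        # Baseline: power management disabled, so "low" hours are spent at full power.
        baseline_kwh = ((modes["full"][0] + modes["low"][0]) * modes["full"][1]
                        + modes["off"][0] * modes["off"][1]) / 1000.0

        print(f"annual use: {energy_kwh:.0f} kWh, savings: {baseline_kwh - energy_kwh:.0f} kWh")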

  3. Milliwave melter monitoring system

    DOEpatents

    Daniel, William E [North Augusta, SC]; Woskov, Paul P [Bedford, MA]; Sundaram, Shanmugavelayutham K [Richland, WA]

    2011-08-16

    A milliwave melter monitoring system is presented that has a waveguide with a portion capable of contacting a molten material in a melter for use in measuring one or more properties of the molten material in a furnace under extreme environments. A receiver is configured for use in obtaining signals from the melt/material transmitted to appropriate electronics through the waveguide. The receiver is configured for receiving signals from the waveguide when contacting the molten material for use in determining the viscosity of the molten material. Other embodiments exist in which the temperature, emissivity, viscosity and other properties of the molten material are measured.

  4. Reprogrammable field programmable gate array with integrated system for mitigating effects of single event upsets

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2010-01-01

    An integrated system mitigates the effects of a single event upset (SEU) on a reprogrammable field programmable gate array (RFPGA). The system includes (i) a RFPGA having an internal configuration memory, and (ii) a memory for storing a configuration associated with the RFPGA. Logic circuitry programmed into the RFPGA and coupled to the memory reloads a portion of the configuration from the memory into the RFPGA's internal configuration memory at predetermined times. Additional SEU mitigation can be provided by logic circuitry on the RFPGA that monitors and maintains synchronized operation of the RFPGA's digital clock managers.

  5. 17 CFR 49.17 - Access to SDR data.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... legal and statutory responsibilities under the Act and related regulations. (2) Monitoring tools. A registered swap data repository is required to provide the Commission with proper tools for the monitoring... data structure and content. These monitoring tools shall be substantially similar in analytical...

  6. 17 CFR 49.17 - Access to SDR data.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... legal and statutory responsibilities under the Act and related regulations. (2) Monitoring tools. A registered swap data repository is required to provide the Commission with proper tools for the monitoring... data structure and content. These monitoring tools shall be substantially similar in analytical...

  7. Performance monitoring for new phase dynamic optimization of instruction dispatch cluster configuration

    DOEpatents

    Balasubramonian, Rajeev [Sandy, UT]; Dwarkadas, Sandhya [Rochester, NY]; Albonesi, David [Ithaca, NY]

    2012-01-24

    In a processor having multiple clusters which operate in parallel, the number of clusters in use can be varied dynamically. At the start of each program phase, the configuration option for an interval is run to determine the optimal configuration, which is used until the next phase change is detected. The optimum instruction interval is determined by starting with a minimum interval and doubling it until a low stability factor is reached.
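
    The interval-doubling search described in the abstract can be sketched in a few lines. The stability metric and threshold below are hypothetical stand-ins; only the doubling loop that stops once measurements become stable reflects the described idea.

        # Hedged sketch of the described search: start with a minimum instruction
        # interval and double it until a stability measure falls below a threshold.
        import random

        def choose_interval(measure_stability, min_interval=1_000,
                            max_interval=1_000_000, threshold=0.05):
            interval = min_interval
            while interval <= max_interval:
                if measure_stability(interval) < threshold:   # low factor == stable
                    return interval
                interval *= 2
            return max_interval

        def toy_stability(interval):
            # Stand-in: in this toy model, longer intervals smooth out noise.
            return random.uniform(0.0, 1.0) / (interval / 1_000)

        print(choose_interval(toy_stability))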

  8. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    NASA Astrophysics Data System (ADS)

    Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.

    2014-06-01

    In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years the Tier1 has used Quattor [2], a server provisioning tool which is currently in production. Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation and configuration features and to offer a fully customizable solution as an alternative to Quattor. Our choice at the moment fell on the integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management operations. The tool should provide the following properties in order to replicate and gradually improve the current system features: a system check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm, or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configuration; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the new system's suitability in the INFN-T1 environment.
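
    As a small illustration of the first requirement listed above (a provisioning-time check that Fibre Channel drivers are blacklisted so a node cannot touch SAN LUNs during disk partitioning), the following Python sketch verifies a modprobe blacklist file. The file path and module names are assumptions made for illustration, not the site's actual configuration.

        # Hedged sketch: check that required "blacklist <module>" entries exist.
        from pathlib import Path

        BLACKLIST_FILE = Path("/etc/modprobe.d/san-blacklist.conf")   # assumed path
        REQUIRED = {"qla2xxx", "lpfc"}                                 # assumed modules

        def blacklisted_modules(path: Path) -> set:
            if not path.exists():
                return set()
            entries = set()
            for line in path.read_text().splitlines():
                parts = line.split()
                if len(parts) == 2 and parts[0] == "blacklist":
                    entries.add(parts[1])
            return entries

        missing = REQUIRED - blacklisted_modules(BLACKLIST_FILE)
        print("OK" if not missing else f"missing blacklist entries: {sorted(missing)}")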

  9. A methodological approach for designing a usable ontology-based GUI in healthcare.

    PubMed

    Lasierra, N; Kushniruk, A; Alesanco, A; Borycki, E; García, J

    2013-01-01

    This paper presents a methodological approach to the design and evaluation of an interface for an ontology-based system used for designing care plans for monitoring patients at home. In order to define the care plans, physicians need a tool for creating instances of the ontology and configuring some rules. Our purpose is to develop an interface that allows clinicians to interact with the ontology. Although ontology-driven applications do not necessarily present the ontology in the user interface, our hypothesis is that showing selected parts of the ontology in a "usable" way could enhance clinicians' understanding and make the definition of care plans easier. Based on prototyping and iterative testing, this methodology combines visualization techniques and usability methods. Preliminary results obtained after a formative evaluation indicate the effectiveness of the suggested combination.

  10. A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.

    2006-01-01

    The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements in ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both the environmental models and the nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermo Luminescent Detector (TLD) area monitors, demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate 6 degree of freedom (DOF) description of ISS trajectory and orientation.

  11. The MSG Central Facility - A Mission Control System for Windows NT

    NASA Astrophysics Data System (ADS)

    Thompson, R.

    The MSG Central Facility, being developed by Science Systems for EUMETSAT, represents the first of a new generation of satellite mission control systems based on the Windows NT operating system. The system makes use of a range of new technologies to provide an integrated environment for the planning, scheduling, control, and monitoring of the entire Meteosat Second Generation mission. It supports packetised TM/TC and uses Science Systems' Space UNiT product to provide automated operations support at both Schedule (Timeline) and Procedure levels. Flexible access to historical data is provided through an operations archive based on ORACLE Enterprise Server, hosted on a large RAID array and off-line tape jukebox. Event-driven real-time data distribution is based on the CORBA standard. Operations preparation and configuration control tools form a fully integrated element of the system.

  12. General Mission Analysis Tool (GMAT) User's Guide (Draft)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

    The General Mission Analysis Tool (GMAT) is a space trajectory optimization and mission analysis system. This document is a draft of the user's guide for the tool. Included in the guide is information about Configuring Objects/Resources, Object Fields: Quick Look-up Tables, and Commands and Events.

  13. Battery switch for downhole tools

    DOEpatents

    Boling, Brian E.

    2010-02-23

    An electrical circuit for a downhole tool may include a battery, a load electrically connected to the battery, and at least one switch electrically connected in series with the battery and to the load. The at least one switch may be configured to close when a tool temperature exceeds a selected temperature.

  14. On-line Monitoring for Cutting Tool Wear Condition Based on the Parameters

    NASA Astrophysics Data System (ADS)

    Han, Fenghua; Xie, Feng

    2017-07-01

    In the cutting process, it is very important to monitor the working state of the tools. Based on acceleration signals acquired at constant speed, time-domain and frequency-domain analysis of relevant indicators is used to monitor the tool wear condition online. The analysis results show that the method can effectively judge the tool wear condition during machining. It has certain application value.

  15. Design of smart sensing components for volcano monitoring

    USGS Publications Warehouse

    Xu, M.; Song, W.-Z.; Huang, R.; Peng, Y.; Shirazi, B.; LaHusen, R.; Kiely, A.; Peterson, N.; Ma, A.; Anusuya-Rangappa, L.; Miceli, M.; McBride, D.

    2009-01-01

    In a volcano monitoring application, various geophysical and geochemical sensors generate continuous high-fidelity data, and there is a compelling need for real-time raw data for volcano eruption prediction research. It requires the network to support network-synchronized sampling, online configurable sensing, and situation awareness, which pose significant challenges on sensing component design. Ideally, resource usage should be driven by the environment and node situations, and data quality optimized under resource constraints. In this paper, we present our smart sensing component design, including hybrid time synchronization, configurable sensing, and situation awareness. Both design details and evaluation results are presented to show their efficiency. Although the presented design is for a volcano monitoring application, its design philosophy and framework can also apply to other similar applications and platforms.

  16. Biased insert for installing data transmission components in downhole drilling pipe

    DOEpatents

    Hall, David R [Provo, UT]; Briscoe, Michael A [Lehi, UT]; Garner, Kory K [Payson, UT]; Wilde, Tyson J [Spanish Fork, UT]

    2007-04-10

    An apparatus for installing data transmission hardware in downhole tools includes an insert insertable into the box end or pin end of drill tool, such as a section of drill pipe. The insert typically includes a mount portion and a slide portion. A data transmission element is mounted in the slide portion of the insert. A biasing element is installed between the mount portion and the slide portion and is configured to create a bias between the slide portion and the mount portion. This biasing element is configured to compensate for varying tolerances encountered in different types of downhole tools. In selected embodiments, the biasing element is an elastomeric material, a spring, compressed gas, or a combination thereof.

  17. xCELLigence system for real-time label-free monitoring of growth and viability of cell lines from hematological malignancies.

    PubMed

    Martinez-Serra, Jordi; Gutierrez, Antonio; Muñoz-Capó, Saúl; Navarro-Palou, María; Ros, Teresa; Amat, Juan Carlos; Lopez, Bernardo; Marcus, Toni F; Fueyo, Laura; Suquia, Angela G; Gines, Jordi; Rubio, Francisco; Ramos, Rafael; Besalduch, Joan

    2014-01-01

    The xCELLigence system is a new technological approach that allows the real-time cell analysis of adherent tumor cells. To date, xCELLigence has not been able to monitor the growth or cytotoxicity of nonadherent cells derived from hematological malignancies. The basis of its technology relies on the use of culture plates with gold microelectrodes located in their base. We have adapted the methodology described by others to xCELLigence, based on the pre-coating of the cell culture surface with specific substrates, some of which are known to facilitate cell adhesion in the extracellular matrix. Pre-coating of the culture plates with fibronectin, compared to laminin, collagen, or gelatin, significantly induced the adhesion of most of the leukemia/lymphoma cells assayed (Jurkat, L1236, KMH2, and K562). With a fibronectin substrate, nonadherent cells deposited in a monolayer configuration, and consequently, the cell growth and viability were robustly monitored. We further demonstrate the feasibility of xCELLigence for the real-time monitoring of the cytotoxic properties of several antineoplastic agents. In order to validate this technology, the data obtained through real-time cell analysis was compared with that obtained from using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide method. This provides an excellent label-free tool for the screening of drug efficacy in nonadherent cells and discriminates optimal time points for further molecular analysis of cellular events associated with treatments, reducing both time and costs.

  18. Warfighter Integrated Physical Ergonomics Tool Development: Needs Analysis and State of the Art Review

    DTIC Science & Technology

    2011-03-01

    Forces: Griffon seat design assessments include questions of vibration ... the suitability of alternative designs. Performance Measures ... configurations to assess ... design and acquisition decisions, and more.

  19. Off-grid MEMS sensors configurations for transportation applications.

    DOT National Transportation Integrated Search

    2013-10-01

    The worsening problem of aging and deficient infrastructure in this nation and across the world has demonstrated the need for an improved system to monitor and maintain these structures. The field of structural health monitoring has grown in recent y...

  20. Nonverbal communication in doctor-elderly patient transactions (NDEPT): development of a tool.

    PubMed

    Gorawara-Bhat, Rita; Cook, Mary Ann; Sachs, Greg A

    2007-05-01

    There are several measurement tools to assess verbal dimensions in clinical encounters; in contrast, there is no established tool to evaluate physical nonverbal dimensions in geriatric encounters. The present paper describes the development of a tool to assess the physical context of exam rooms in doctor-older patient visits. Salient features of the tool were derived from the medical literature and systematic observations of videotapes, and refined during the current research. The tool consists of two main dimensions of exam rooms: (1) physical dimensions comprising static and dynamic attributes that become operational through the spatial configuration and can influence the manifestation of (2) kinesic attributes. Details of the coding form and inter-rater reliability are presented. The usefulness of the tool is demonstrated through an analysis of 50 National Institute on Aging videotapes. Physicians in exam rooms with no desk in the interaction, no height difference, and optimal interaction distance were observed to have greater eye contact and touch than physicians in exam rooms with a desk, similar height difference, and interaction distance. The tool can enable physicians to assess the spatial configuration of exam rooms (through Parts A and B) and thus facilitate the structuring of kinesic attributes (Part C).

  1. nQuire: Technological Support for Personal Inquiry Learning

    ERIC Educational Resources Information Center

    Mulholland, P.; Anastopoulou, S.; Collins, T.; Feisst, M.; Gaved, M.; Kerawalla, L.; Paxton, M.; Scanlon, E.; Sharples, M.; Wright, M.

    2012-01-01

    This paper describes the development of nQuire, a software application to guide personal inquiry learning. nQuire provides teacher support for authoring, orchestrating, and monitoring inquiries as well as student support for carrying out, configuring, and reviewing inquiries. nQuire allows inquiries to be scripted and configured in various ways,…

  2. SCAILET - An intelligent assistant for satellite ground terminal operations

    NASA Technical Reports Server (NTRS)

    Shahidi, A. K.; Crapo, J. A.; Schlegelmilch, R. F.; Reinhart, R. C.; Petrik, E. J.; Walters, J. L.; Jones, R. E.

    1992-01-01

    Space communication artificial intelligence for the link evaluation terminal (SCAILET) is an experimenter interface to the link evaluation terminal (LET) developed by NASA through the application of artificial intelligence to an advanced ground terminal. The high-burst-rate (HBR) LET provides the required capabilities for wideband communications experiments with the advanced communications technology satellite (ACTS). The HBR-LET terminal consists of seven major subsystems and is controlled and monitored by a minicomputer through an IEEE-488 or RS-232 interface. Programming scripts configure HBR-LET and allow data acquisition but are difficult to use and therefore the full capabilities of the system are not utilized. An intelligent assistant module was developed as part of the SCAILET module and solves problems encountered during configuration of the HBR-LET system. This assistant is a graphical interface with an expert system running in the background and allows users to configure instrumentation, program sequences and reference documentation. The simplicity of use makes SCAILET a superior interface to the ASCII terminal and continuous monitoring allows nearly flawless configuration and execution of HBR-LET experiments.

  3. Managing configuration software of ground software applications with glueware

    NASA Technical Reports Server (NTRS)

    Larsen, B.; Herrera, R.; Sesplaukis, T.; Cheng, L.; Sarrel, M.

    2003-01-01

    This paper reports on a simple, low-cost effort to streamline the configuration of the uplink software tools. Even though the existing ground system consisted of JPL and custom Cassini software rather than COTS, we chose a glueware approach--reintegrating with wrappers and bridges and adding minimal new functionality.

  4. Determination of absolute configuration of natural products: theoretical calculation of electronic circular dichroism as a tool

    USDA-ARS?s Scientific Manuscript database

    Determination of absolute configuration (AC) is one of the most challenging features in the structure elucidation of chiral natural products, especially those with complex structures. With revolutionary advancements in the area of quantum chemical calculations of chiroptical spectroscopy over the pa...

  5. Qualitative Network Analysis Tools for the Configurative Articulation of Cultural Value and Impact from Research

    ERIC Educational Resources Information Center

    Oancea, Alis; Florez Petour, Teresa; Atkinson, Jeanette

    2017-01-01

    This article introduces a methodological approach for articulating and communicating the impact and value of research: qualitative network analysis using collaborative configuration tracing and visualization. The approach was proposed initially in Oancea ("Interpretations and Practices of Research Impact across the Range of Disciplines…

  6. Runtime Performance Monitoring Tool for RTEMS System Software

    NASA Astrophysics Data System (ADS)

    Cho, B.; Kim, S.; Park, H.; Kim, H.; Choi, J.; Chae, D.; Lee, J.

    2007-08-01

    RTEMS is a commercial-grade real-time operating system that supports multi-processor computers. However, there are not many development tools for RTEMS. In this paper, we report a new RTEMS-based runtime performance monitoring tool. We have implemented a lightweight runtime monitoring task with an extension to the RTEMS APIs. Using our tool, software developers can verify various performance-related parameters during runtime. Our tool can be used during the software development phase as well as in in-orbit operation. Our implemented target agent is lightweight and has small overhead using the SpaceWire interface. Efforts to reduce overhead and to add other monitoring parameters are currently under research.

  7. Ground Data System Analysis Tools to Track Flight System State Parameters for the Mars Science Laboratory (MSL) and Beyond

    NASA Technical Reports Server (NTRS)

    Allard, Dan; Deforrest, Lloyd

    2014-01-01

    Flight software parameters give space mission operators fine-tuned control over flight system configurations, enabling rapid and dynamic changes to ongoing science activities in a much more flexible manner than can be accomplished with (otherwise broadly used) configuration-file-based approaches. The Mars Science Laboratory (MSL), Curiosity, makes extensive use of parameters to support complex, daily activities via commanded changes to said parameters in memory. However, as the loss of Mars Global Surveyor (MGS) in 2006 demonstrated, flight system management by parameters brings with it risks, including the possibility of losing track of the flight system configuration and the threat of invalid command executions. To mitigate this risk, a growing number of missions have funded efforts to implement parameter state tracking software tools and services, including MSL and the Soil Moisture Active Passive (SMAP) mission. This paper will discuss the engineering challenges and resulting software architecture of MSL's onboard parameter state tracking software and discuss the road forward to make parameter management tools suitable for use on multiple missions.
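
    The ground-side tracking idea can be caricatured with a small data structure: record every commanded parameter change and flag parameters whose last commanded value has not yet been confirmed by telemetry. The class and parameter names below are hypothetical and are not the MSL tool's design.

        # Hedged sketch: track commanded vs. telemetry-confirmed parameter values.
        from dataclasses import dataclass, field

        @dataclass
        class ParameterState:
            commanded: dict = field(default_factory=dict)
            confirmed: dict = field(default_factory=dict)

            def command(self, name, value):
                self.commanded[name] = value

            def confirm_from_telemetry(self, name, value):
                self.confirmed[name] = value

            def mismatches(self):
                return {n: (v, self.confirmed.get(n))
                        for n, v in self.commanded.items()
                        if self.confirmed.get(n) != v}

        state = ParameterState()
        state.command("ATT_DEADBAND_DEG", 2.0)
        state.confirm_from_telemetry("ATT_DEADBAND_DEG", 2.0)
        state.command("HEATER_SETPOINT_C", -10.0)        # commanded, not yet confirmed
        print(state.mismatches())   # {'HEATER_SETPOINT_C': (-10.0, None)}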

  8. Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs

    NASA Astrophysics Data System (ADS)

    Ringenburg, Michael F.

    Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in controlling output quality while still maintaining significant energy efficiency gains.
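
    One of the online monitoring ideas described above (checking output quality without spending more energy than approximation saves) can be sketched as occasional spot checks: re-run a precise version on a small random sample of inputs and track the relative error. The sampling rate, error metric, and functions below are illustrative choices, not those of the thesis.

        # Hedged sketch of low-cost online quality monitoring via sampled spot checks.
        import random

        def monitor_quality(inputs, approx_fn, precise_fn, sample_rate=0.05):
            errors = []
            for x in inputs:
                y = approx_fn(x)                      # value actually used downstream
                if random.random() < sample_rate:     # occasionally pay for a check
                    ref = precise_fn(x)
                    if ref != 0:
                        errors.append(abs(y - ref) / abs(ref))
            return sum(errors) / len(errors) if errors else 0.0

        def precise(x):
            return x ** 0.5

        def approx(x):
            return x ** 0.5 * (1 + random.uniform(-0.01, 0.01))   # ~1% noise

        print(f"mean relative error: {monitor_quality(range(1, 10000), approx, precise):.4f}")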

  9. Performance and Sizing Tool for Quadrotor Biplane Tailsitter UAS

    NASA Astrophysics Data System (ADS)

    Strom, Eric

    The Quadrotor-Biplane-Tailsitter (QBT) configuration is the basis for a mechanically simple rotorcraft capable of both long-range, high-speed cruise and hovering flight. This work presents the development and validation of a set of preliminary design tools built specifically for this aircraft to enable its further development, including a QBT weight model, a preliminary sizing framework, and vehicle analysis tools. The preliminary sizing tool presented here shows the advantage afforded by QBT designs in missions with aggressive cruise requirements, such as offshore wind turbine inspections, wherein transition from a quadcopter configuration to a QBT allows for a 5:1 trade of battery weight for wing weight. A 3D, unsteady panel method utilizing a nonlinear implementation of the Kutta-Joukowsky condition is also presented as a means of computing aerodynamic interference effects and, through the implementation of rotor, body, and wing geometry generators, is prepared for coupling with a comprehensive rotor analysis package.

  10. PT-SAFE: a software tool for development and annunciation of medical audible alarms.

    PubMed

    Bennett, Christopher L; McNeer, Richard R

    2012-03-01

    Recent reports by The Joint Commission as well as the Anesthesia Patient Safety Foundation have indicated that medical audible alarm effectiveness needs to be improved. Several recent studies have explored various approaches to improving the audible alarms, motivating the authors to develop real-time software capable of comparing such alarms. We sought to devise software that would allow for the development of a variety of audible alarm designs that could also integrate into existing operating room equipment configurations. The software is meant to be used as a tool for alarm researchers to quickly evaluate novel alarm designs. A software tool was developed for the purpose of creating and annunciating audible alarms. The alarms consisted of annunciators that were mapped to vital sign data received from a patient monitor. An object-oriented approach to software design was used to create a tool that is flexible and modular at run-time, can annunciate wave-files from disk, and can be programmed with MATLAB by the user to create custom alarm algorithms. The software was tested in a simulated operating room to measure technical performance and to validate the time-to-annunciation against existing equipment alarms. The software tool showed efficacy in a simulated operating room environment by providing alarm annunciation in response to physiologic and ventilator signals generated by a human patient simulator, on average 6.2 seconds faster than existing equipment alarms. Performance analysis showed that the software was capable of supporting up to 15 audible alarms on a mid-grade laptop computer before audio dropouts occurred. These results suggest that this software tool provides a foundation for rapidly staging multiple audible alarm sets from the laboratory to a simulation environment for the purpose of evaluating novel alarm designs, thus producing valuable findings for medical audible alarm standardization.
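
    The mapping of annunciators to vital-sign data can be pictured with a minimal rule table. The sketch below is not the PT-SAFE implementation (which is MATLAB-programmable); the signal names and thresholds are illustrative assumptions.

        # Hedged sketch: map vital-sign samples to alarm annunciations via threshold rules.
        ALARM_RULES = {
            "SpO2":  lambda v: v < 90,               # percent, assumed threshold
            "HR":    lambda v: v < 40 or v > 140,    # beats per minute, assumed
            "EtCO2": lambda v: v < 25 or v > 60,     # mmHg, assumed
        }

        def evaluate(sample, annunciate):
            """sample: dict of vital sign -> value; annunciate: callback per alarm."""
            for sign, value in sample.items():
                rule = ALARM_RULES.get(sign)
                if rule and rule(value):
                    annunciate(f"{sign} alarm: {value}")

        evaluate({"SpO2": 87, "HR": 72, "EtCO2": 38}, annunciate=print)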

  11. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment, and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and of applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure the flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch, and we set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad hoc RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication to the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which Cloud tenants can easily configure to suit their specific needs.
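
    To make the data path above concrete, the hedged sketch below indexes one IaaS metering record into Elasticsearch over its plain HTTP API. The host, index name, and document fields are assumptions for illustration only; the setup described in the paper uses a custom Logstash plugin and a MySQL back-end rather than direct indexing.

        # Hedged sketch: POST one accounting document to an Elasticsearch index.
        import json
        from datetime import datetime, timezone

        import requests

        doc = {
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "tenant": "alice-tier2",          # assumed field names and values
            "vm_id": "one-1234",
            "cpu_hours": 3.5,
            "memory_gb_hours": 14.0,
        }

        resp = requests.post(
            "http://localhost:9200/iaas-metering/_doc",   # assumed host and index
            data=json.dumps(doc),
            headers={"Content-Type": "application/json"},
            timeout=10,
        )
        resp.raise_for_status()
        print(resp.json()["_id"])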

  12. Green roof rainfall-runoff modelling: is the comparison between conceptual and physically based approaches relevant?

    NASA Astrophysics Data System (ADS)

    Versini, Pierre-Antoine; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    Green roofs are commonly considered efficient tools to mitigate urban runoff, as they can store precipitation and consequently provide retention and detention performance. Designed as a compromise between water-holding capacity, weight, and hydraulic conductivity, their substrate is usually an artificial medium differing significantly from a traditional soil. In order to assess green roof hydrological performance, many models have been developed. Classified into two categories (conceptual and physically based), they are usually applied to reproduce the discharge of a particular monitored green roof considered as homogeneous. Although the resulting simulations can be satisfactory, the question of the robustness and consistency of the calibrated parameters is often not addressed. Here, a modelling framework has been developed to assess the efficiency and the robustness of both modelling approaches (conceptual and physically based) in reproducing green roof hydrological behaviour. The SWMM and VS2DT models have been used for this purpose. This work also benefits from an experimental setup where several green roofs differentiated by their substrate thickness and vegetation cover are monitored. Based on the data collected for several rainfall events, it has been studied how the calibrated parameters are effectively linked to their physical properties and how they can vary from one green roof configuration to another. Although both models reproduce the observed discharges correctly in most cases, their calibrated parameters exhibit a high inconsistency. For a same green roof configuration, these parameters can vary significantly from one rainfall event to another, even if they are supposed to be linked to the green roof characteristics (roughness or residual moisture content, for instance). They can also differ from one green roof configuration to another although the implemented substrate is the same. Finally, it appears very difficult to find any relationship between the calibrated parameters supposed to represent similar characteristics in both models (porosity, hydraulic conductivity). These results illustrate the difficulty of reproducing the hydrological behaviour of the artificial media constituting green roof substrates. They justify the development of new methods able to take into account the spatial heterogeneity of the substrate, for instance.
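
    For readers unfamiliar with the distinction drawn above, a conceptual model typically represents the substrate as one or more simple storages rather than solving unsaturated-flow equations. The sketch below is a deliberately minimal single-storage example with capped storage and a linear drainage term, not the SWMM or VS2DT formulation; the storage capacity and recession constant are illustrative assumptions, and evaporation is ignored.

        # Minimal conceptual green-roof runoff sketch: one capped storage with
        # linear drainage. smax (mm) and k (1/h) are illustrative, not calibrated.
        def simulate_runoff(rain_mm_per_step, smax=30.0, k=0.15, dt_hours=1.0):
            storage = 0.0
            runoff = []
            for rain in rain_mm_per_step:
                storage += rain
                overflow = max(0.0, storage - smax)      # substrate is full
                storage -= overflow
                drainage = k * storage * dt_hours        # slow release (detention)
                storage -= drainage
                runoff.append(overflow + drainage)
            return runoff

        event = [0, 2, 8, 15, 5, 1, 0, 0]                # mm per hourly step
        print([round(q, 2) for q in simulate_runoff(event)])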

  13. Friction Stir Welding of Tapered Thickness Welds Using an Adjustable Pin Tool

    NASA Technical Reports Server (NTRS)

    Adams, Glynn; Venable, Richard; Lawless, Kirby

    2003-01-01

    Friction stir welding (FSW) can be used for joining weld lands that vary in thickness along the length of the weld. An adjustable pin tool mechanism can be used to accomplish this in a single-pass, full-penetration weld by providing for precise changes in the pin length relative to the shoulder face during the weld process. The difficulty with this approach is in accurately adjusting the pin length to provide a consistent penetration ligament throughout the weld. The weld technique, control system, and instrumentation must account for mechanical and thermal compliances of the tooling system to conduct tapered welds successfully. In this study, a combination of static and in-situ measurements, as well as active control, is used to locate the pin accurately and maintain the desired penetration ligament. Frictional forces at the pin/shoulder interface were a source of error that affected accurate pin positioning. A traditional FSW pin tool design that requires a lead angle was used to join butt weld configurations that included both constant-thickness and tapered sections. The pitch axis of the tooling was fixed throughout the weld; therefore, the effective lead angle in the tapered sections was restricted to within the tolerances allowed by the pin tool design. The sensitivity of the FSW process to factors such as thickness offset, joint gap, centerline offset, and taper transition offset was also studied. The joint gap and the thickness offset had the most adverse effects on weld quality. Two separate tooling configurations were used to conduct tapered thickness welds successfully. The weld configurations included sections in which the thickness decreased along the weld, as well as sections in which the thickness increased along the weld. The data presented here include weld metallography, strength data, and process load data.

  14. ATLAS@AWS

    NASA Astrophysics Data System (ADS)

    Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan

    2010-04-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). Then an instance of the SL4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring, and stopping jobs, and retrieving job output from S3 is controlled from a client machine using Python scripts implementing the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
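
    The client-side workflow (launch an instance from the prepared image, let it run the job, pull the output back from S3, terminate) can be sketched with the modern boto3 library; the original work used the older boto API. The AMI ID, instance type, bucket, and key names below are placeholders, not values from the paper, and valid AWS credentials are assumed.

        # Hedged sketch of the launch / retrieve / terminate cycle using boto3.
        import boto3

        ec2 = boto3.client("ec2")
        s3 = boto3.client("s3")

        # 1. Launch an instance from the prepared machine image (placeholder AMI ID).
        run = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                                InstanceType="m5.large",
                                MinCount=1, MaxCount=1)
        instance_id = run["Instances"][0]["InstanceId"]

        # 2. ... the instance boots, attaches the software volume, and runs the job ...

        # 3. Retrieve job output that the instance exported to S3, then terminate.
        s3.download_file("my-job-output-bucket", "job-1234/output.root", "output.root")
        ec2.terminate_instances(InstanceIds=[instance_id])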

  15. Methods and Best Practice to Intercompare Dissolved Oxygen Sensors and Fluorometers/Turbidimeters for Oceanographic Applications.

    PubMed

    Pensieri, Sara; Bozzano, Roberto; Schiano, M Elisabetta; Ntoumas, Manolis; Potiris, Emmanouil; Frangoulis, Constantin; Podaras, Dimitrios; Petihakis, George

    2016-05-17

    In European seas, ocean monitoring strategies in terms of key parameters, space and time scale vary widely for a range of technical and economic reasons. Nonetheless, the growing interest in the ocean interior promotes the investigation of processes such as oxygen consumption, primary productivity and ocean acidity requiring that close attention is paid to the instruments in terms of measurement setup, configuration, calibration, maintenance procedures and quality assessment. To this aim, two separate hardware and software tools were developed in order to test and simultaneously intercompare several oxygen probes and fluorometers/turbidimeters, respectively in the same environmental conditions, with a configuration as close as possible to real in-situ deployment. The chamber designed to perform chlorophyll-a and turbidity tests allowed for the simultaneous acquisition of analogue and digital signals of several sensors at the same time, so it was sufficiently compact to be used in both laboratory and onboard vessels. Methodologies and best practice committed to the intercomparison of dissolved oxygen sensors and fluorometers/turbidimeters have been used, which aid in the promotion of interoperability to access key infrastructures, such as ocean observatories and calibration facilities. Results from laboratory tests as well as field tests in the Mediterranean Sea are presented.

  16. Methods and Best Practice to Intercompare Dissolved Oxygen Sensors and Fluorometers/Turbidimeters for Oceanographic Applications

    PubMed Central

    Pensieri, Sara; Bozzano, Roberto; Schiano, M. Elisabetta; Ntoumas, Manolis; Potiris, Emmanouil; Frangoulis, Constantin; Podaras, Dimitrios; Petihakis, George

    2016-01-01

    In European seas, ocean monitoring strategies in terms of key parameters, space and time scale vary widely for a range of technical and economic reasons. Nonetheless, the growing interest in the ocean interior promotes the investigation of processes such as oxygen consumption, primary productivity and ocean acidity requiring that close attention is paid to the instruments in terms of measurement setup, configuration, calibration, maintenance procedures and quality assessment. To this aim, two separate hardware and software tools were developed in order to test and simultaneously intercompare several oxygen probes and fluorometers/turbidimeters, respectively in the same environmental conditions, with a configuration as close as possible to real in-situ deployment. The chamber designed to perform chlorophyll-a and turbidity tests allowed for the simultaneous acquisition of analogue and digital signals of several sensors at the same time, so it was sufficiently compact to be used in both laboratory and onboard vessels. Methodologies and best practice committed to the intercomparison of dissolved oxygen sensors and fluorometers/turbidimeters have been used, which aid in the promotion of interoperability to access key infrastructures, such as ocean observatories and calibration facilities. Results from laboratory tests as well as field tests in the Mediterranean Sea are presented. PMID:27196908

  17. Content and functional specifications for a standards-based multidisciplinary rounding tool to maintain continuity across acute and critical care.

    PubMed

    Collins, Sarah; Hurley, Ann C; Chang, Frank Y; Illa, Anisha R; Benoit, Angela; Laperle, Sarah; Dykes, Patricia C

    2014-01-01

    Maintaining continuity of care (CoC) in the inpatient setting is dependent on aligning goals and tasks with the plan of care (POC) during multidisciplinary rounds (MDRs). A number of locally developed rounding tools exist, yet there is a lack of standard content and functional specifications for electronic tools to support MDRs within and across settings. To identify content and functional requirements for an MDR tool to support CoC. We collected discrete clinical data elements (CDEs) discussed during rounds for 128 acute and critical care patients. To capture CDEs, we developed and validated an iPad-based observational tool based on informatics CoC standards. We observed 19 days of rounds and conducted eight group and individual interviews. Descriptive and bivariate statistics and network visualization were conducted to understand associations between CDEs discussed during rounds with a particular focus on the POC. Qualitative data were thematically analyzed. All analyses were triangulated. We identified the need for universal and configurable MDR tool views across settings and users and the provision of messaging capability. Eleven empirically derived universal CDEs were identified, including four POC CDEs: problems, plan, goals, and short-term concerns. Configurable POC CDEs were: rationale, tasks/'to dos', pending results and procedures, discharge planning, patient preferences, need for urgent review, prognosis, and advice/guidance. Some requirements differed between settings; yet, there was overlap between POC CDEs. We recommend an initial list of 11 universal CDEs for continuity in MDRs across settings and 27 CDEs that can be configured to meet setting-specific needs.
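
    The split between universal and configurable plan-of-care elements maps naturally onto a small configuration structure. The element names below come from the abstract; the data structure and function are only an illustrative assumption about how a setting-specific view might be assembled.

        # Hedged sketch: build a setting-specific rounding-tool view from universal
        # plan-of-care CDEs plus a configurable subset (names taken from the abstract).
        UNIVERSAL_POC_CDES = ["problems", "plan", "goals", "short-term concerns"]
        CONFIGURABLE_POC_CDES = [
            "rationale", "tasks/'to dos'", "pending results and procedures",
            "discharge planning", "patient preferences", "need for urgent review",
            "prognosis", "advice/guidance",
        ]

        def build_view(setting_selection):
            unknown = set(setting_selection) - set(CONFIGURABLE_POC_CDES)
            if unknown:
                raise ValueError(f"not a configurable CDE: {sorted(unknown)}")
            return UNIVERSAL_POC_CDES + list(setting_selection)

        icu_view = build_view(["need for urgent review", "prognosis"])
        print(icu_view)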

  18. Autonomous Mission Operations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Spirkovska, Lilijana; McCann, Rob; Wang, Lui; Pohlkamp, Kara; Morin, Lee

    2012-01-01

    NASA's Advanced Exploration Systems Autonomous Mission Operations (AMO) project conducted an empirical investigation of the impact of time delay on today's mission operations, and of the effect of processes and mission support tools designed to mitigate time-delay related impacts. Mission operation scenarios were designed for NASA's Deep Space Habitat (DSH), an analog spacecraft habitat, covering a range of activities including nominal objectives, DSH system failures, and crew medical emergencies. The scenarios were simulated at time-delay values representative of Lunar (1.2-5 sec), Near Earth Object (NEO) (50 sec) and Mars (300 sec) missions. Each combination of operational scenario and time delay was tested in a Baseline configuration, designed to reflect present-day operations of the International Space Station, and a Mitigation configuration in which a variety of software tools, information displays, and crew-ground communications protocols were employed to assist both crews and Flight Control Team (FCT) members with the long-delay conditions. Preliminary findings indicate: 1) Workload of both crew members and FCT members generally increased along with increasing time delay. 2) Advanced procedure execution viewers, caution and warning tools, and communications protocols such as text messaging decreased the workload of both flight controllers and crew, and decreased the difficulty of coordinating activities. 3) Whereas crew workload ratings increased between 50 sec and 300 sec of time delay in the Baseline configuration, workload ratings decreased (or remained flat) in the Mitigation configuration.

  19. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Volume 1: Study results

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    A variety of artificial intelligence techniques which could be used with regard to NASA space applications and robotics were evaluated. The techniques studied were decision tree manipulators, problem solvers, rule based systems, logic programming languages, representation language languages, and expert systems. The overall structure of a robotic simulation tool was defined and a framework for that tool developed. Nonlinear and linearized dynamics equations were formulated for n link manipulator configurations. A framework for the robotic simulation was established which uses validated manipulator component models connected according to a user defined configuration.

  20. The Wettzell System Monitoring Concept and First Realizations

    NASA Technical Reports Server (NTRS)

    Ettl, Martin; Neidhardt, Alexander; Muehlbauer, Matthias; Ploetz, Christian; Beaudoin, Christopher

    2010-01-01

    Automated monitoring of operational system parameters for the geodetic space techniques is becoming more important in order to improve the geodetic data and to ensure the safety and stability of automatic and remote-controlled observations. Therefore, the Wettzell group has developed the system monitoring software SysMon, which is based on a reliable, remotely controllable hardware/software realization. A multi-layered data logging system based on a fanless, robust industrial PC with an internal database system is used to collect data from several external, serial, bus, or PCI-based sensors. The internal communication is realized with Remote Procedure Calls (RPC) and uses generative programming with the interface software generator idl2rpc.pl developed at Wettzell. Each data monitoring stream can be configured individually via configuration files to define the logging rates or analog-digital-conversion parameters. First realizations are currently installed at the new laser ranging system at Wettzell to address safety issues and at the VLBI station O'Higgins as a meteorological data logger. The system monitoring concept should be realized for the Wettzell radio telescope in the near future.

  1. Guidelines and standard procedures for continuous water-quality monitors: Station operation, record computation, and data reporting

    USGS Publications Warehouse

    Wagner, Richard J.; Boulger, Robert W.; Oblinger, Carolyn J.; Smith, Brett A.

    2006-01-01

    The U.S. Geological Survey uses continuous water-quality monitors to assess the quality of the Nation's surface water. A common monitoring-system configuration for water-quality data collection is the four-parameter monitoring system, which collects temperature, specific conductance, dissolved oxygen, and pH data. Such systems also can be configured to measure other properties, such as turbidity or fluorescence. Data from sensors can be used in conjunction with chemical analyses of samples to estimate chemical loads. The sensors that are used to measure water-quality field parameters require careful field observation, cleaning, and calibration procedures, as well as thorough procedures for the computation and publication of final records. This report provides guidelines for site- and monitor-selection considerations; sensor inspection and calibration methods; field procedures; data evaluation, correction, and computation; and record-review and data-reporting processes, which supersede the guidelines presented previously in U.S. Geological Survey Water-Resources Investigations Report WRIR 00-4252. These procedures have evolved over the past three decades, and the process continues to evolve with newer technologies.
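
    One routine step in the record computation such guidelines cover is distributing the drift found at a service visit linearly back over the period since the previous calibration. The sketch below is a generic illustration of that idea, not text from the report; the dates, reading, and drift value are arbitrary.

        # Hedged sketch: linear drift correction between two calibration visits.
        from datetime import datetime

        def drift_corrected(t, value, t_start, t_end, drift_at_end):
            """Subtract a linearly growing share of the end-of-period drift."""
            frac = (t - t_start).total_seconds() / (t_end - t_start).total_seconds()
            return value - frac * drift_at_end

        t0 = datetime(2024, 6, 1)
        t1 = datetime(2024, 7, 1)
        # A specific-conductance reading of 500 uS/cm midway through a period in
        # which the sensor was found to read 8 uS/cm high at the next calibration.
        print(drift_corrected(datetime(2024, 6, 16), 500.0, t0, t1, 8.0))   # 496.0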

  2. Development of a high resolution plantar pressure monitoring pad based on fiber Bragg grating (FBG) sensors.

    PubMed

    Suresh, R; Bhalla, S; Hao, J; Singh, C

    2015-01-01

    High importance is given to plantar pressure monitoring in the field of biomedical engineering for the diagnosis of posture-related ailments associated with diseases such as diabetes and gonarthrosis. This paper presents the proof-of-concept development of a new high-resolution plantar pressure monitoring pad based on fiber Bragg grating (FBG) sensors. In the proposed configuration, the FBG sensors are embedded within layers of carbon composite material (CCM), in turn conforming to an arc shape. A total of four such arc-shaped sensors are instrumented in the pad at the locations of the forefoot and the hind foot. As a test of the pad, static plantar pressure is monitored on normal subjects under various posture conditions. The pad is evaluated both as a standalone platform and as a pad inserted inside a standard shoe. An average pressure sensitivity of 1.2 pm/kPa and a resolution of approximately 0.8 kPa are obtained in this special configuration. The pad is found to be suitable in both configurations: as a stand-alone pad and as an insert inside a standard shoe. The proposed setup offers a cost-effective, high-resolution, and accurate plantar pressure measurement system suitable for clinical deployment. The novelty of the developed pressure pad lies in its ability to be used both as a platform-type sensor and as an inserted in-sole type sensor system.
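
    The reported sensitivity of 1.2 pm/kPa gives a direct conversion from measured Bragg wavelength shift to plantar pressure; a short worked example follows, with the wavelength shift chosen arbitrarily for illustration.

        # Worked example using the reported sensitivity of 1.2 pm/kPa.
        SENSITIVITY_PM_PER_KPA = 1.2

        def pressure_kpa(wavelength_shift_pm):
            return wavelength_shift_pm / SENSITIVITY_PM_PER_KPA

        print(pressure_kpa(120.0))   # a 120 pm shift corresponds to 100.0 kPa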

  3. Environmental Monitoring Instrumentation and Monitoring Techniques for Space Shuttle Launches.

    DTIC Science & Technology

    1983-07-01

    Indexed table-of-contents fragments: Monitoring Instrumentation (chemiluminescence HCl; passive dosimeter; piezoelectric quartz crystal microbalance); ... Sensing for STS Launches; Summary and Conclusions; Recommendations; References; Appendix A, Dosimeter Tube Monitoring Results; Appendix B, Tenax Monitoring Results; Summary of HCl Data for the Launches of STS-1 through STS-5 at KSC; Dosimeter Tube Inlet Configuration Comparison; pH.

  4. Impact of novel shift handle laparoscopic tool on wrist ergonomics and task performance

    PubMed Central

    Yu, Denny; Lowndes, Bethany; Morrow, Missy; Kaufman, Kenton; Bingener, Juliane; Hallbeck, Susan

    2015-01-01

    Background: Laparoscopic tool handles causing wrist flexion and extension more than 15° from neutral are considered “at-risk” for musculoskeletal strain. Therefore this study measured the impact of laparoscopic tool handle angles on wrist postures and task performance. Methods: Eight surgeons performed standard and modified Fundamentals of Laparoscopic Surgery (FLS) tasks with laparoscopic tools. Tool A had three adjustable handle angle configurations, i.e., in-line 0° (A0), 30° (A30), and pistol-grip 70° (A70). Tool B was a fixed pistol-grip grasper. Participants performed FLS peg transfer, inverted peg transfer, and inverted circle-cut with each tool and handle angle. Inverted tasks were adapted from standard FLS tasks to simulate advanced tasks observed during abdominal wall surgeries, e.g., ventral hernia. Motion tracking, video-analysis, and modified NASA-TLX workload questionnaires were used to measure postures, performance (e.g., completion time and errors), and workload. Results: Task performance did not differ among tools. For FLS peg transfer, self-reported physical workload was lower for B than A70, and mean wrist postures showed significantly higher flexion for in-line than pistol-grip tools (B and A70). For inverted peg transfer, workload was higher for all configurations. However, less time was spent in at-risk wrist postures for in-line (47%) than pistol-grip (93-94%), and most participants preferred Tool A. For inverted circle cut, workload did not vary across configurations, mean wrist posture was 10° closer to neutral for A0 than B, and median time in at-risk wrist postures was significantly less for A0 (43%) than B (87%). Conclusion: The best ergonomic wrist positions for FLS (floor) tasks are provided by pistol-grip tools and for tasks on the abdominal wall (ventral surface) by in-line handles. Adjustable handle angle laparoscopic tools can reduce ergonomic risks for musculoskeletal strain and allow versatility for tasks alternating between the floor and ceiling positions in a surgical trainer without impacting performance. PMID:26541720

  5. Impact of novel shift handle laparoscopic tool on wrist ergonomics and task performance.

    PubMed

    Yu, Denny; Lowndes, Bethany; Morrow, Missy; Kaufman, Kenton; Bingener, Juliane; Hallbeck, Susan

    2016-08-01

    Laparoscopic tool handles causing wrist flexion and extension more than 15° from neutral are considered "at risk" for musculoskeletal strain. Therefore, this study measured the impact of laparoscopic tool handle angles on wrist postures and task performance. Eight surgeons performed standard and modified Fundamentals of Laparoscopic Surgery (FLS) tasks with laparoscopic tools. Tool A had three adjustable handle angle configurations, i.e., in-line 0° (A0), 30° (A30), and pistol-grip 70° (A70). Tool B was a fixed pistol-grip grasper. Participants performed FLS peg transfer, inverted peg transfer, and inverted circle cut with each tool and handle angle. Inverted tasks were adapted from standard FLS tasks to simulate advanced tasks observed during abdominal wall surgeries, e.g., ventral hernia. Motion tracking, video analysis, and modified NASA-TLX workload questionnaires were used to measure postures, performance (e.g., completion time and errors), and workload. Task performance did not differ between tools. For FLS peg transfer, self-reported physical workload was lower for B than for A70, and mean wrist postures showed significantly higher flexion for in-line than for pistol-grip tools (B and A70). For inverted peg transfer, workload was higher for all configurations. However, less time was spent in at-risk wrist postures for in-line (47 %) than for pistol-grip (93-94 %), and most participants preferred Tool A. For inverted circle cut, workload did not vary across configurations, mean wrist posture was 10° closer to neutral for A0 than B, and median time in at-risk wrist postures was significantly less for A0 (43 %) than for B (87 %). The best ergonomic wrist positions for FLS (floor) tasks are provided by pistol-grip tools and for tasks on the abdominal wall (ventral surface) by in-line handles. Adjustable handle angle laparoscopic tools can reduce ergonomic risks of musculoskeletal strain and allow versatility for tasks alternating between the floor and ceiling positions in a surgical trainer without impacting performance.

  6. Image edge detection based tool condition monitoring with morphological component analysis.

    PubMed

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection has been a fundamental tool to obtain features of images. This approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise for edge measurement. Based on the target image sparse representation and edge detection, the approach could accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to the celebrated algorithms developed in the literature, this approach improves the integrity and connectivity of edges, and the results have shown that it achieves better geometry accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
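
    The paper's morphological component analysis is more involved than a short sketch can show; the snippet below is only a generic stand-in illustrating the basic step of extracting an edge map from a (synthetic) tool image by smoothing and thresholding the gradient magnitude. All image contents and thresholds are invented.

```python
# Not the paper's morphological component analysis: a generic gradient-based
# edge map on a synthetic "tool wear" image, illustrating the kind of edge
# extraction the abstract refers to. All array contents are made up.

import numpy as np
from scipy import ndimage

# Synthetic grayscale image: bright worn region on a darker background.
image = np.zeros((64, 64))
image[20:44, 16:50] = 1.0
image += 0.05 * np.random.default_rng(0).normal(size=image.shape)  # texture/noise

# Smooth first to suppress texture, then take the gradient magnitude.
smoothed = ndimage.gaussian_filter(image, sigma=1.5)
gx = ndimage.sobel(smoothed, axis=1)
gy = ndimage.sobel(smoothed, axis=0)
edges = np.hypot(gx, gy) > 0.3  # threshold chosen arbitrarily for the sketch

print("edge pixels:", int(edges.sum()))
```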

  7. Operationalization of Prediction, Hindcast, and Evaluation Systems using the Freie Univ Evaluation System Framework (Freva) incl. a Showcase in Decadal Climate Prediction

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Schartner, Thomas; Ulbrich, Uwe; Cubasch, Ulrich

    2017-04-01

    Operationalization processes are important for Weather and Climate Services. Complex data and workflows need to be combined quickly to fulfill the needs of service centers. Standards in data and software formats help in automatic solutions. In this study we show a software solution that links hindcasts, forecasts, and validation so that they can be operationalized. Freva (see below) structures data and evaluation procedures and can easily be monitored. Especially in the development process of operationalized services, Freva supports scientists and project partners. The showcase of the decadal climate prediction project MiKlip (fona-miklip.de) shows such a complex development process. Different predictions, scientists' input, tasks, and time-evolving adjustments need to be combined to host precise climate information in a web environment without losing track of its evolution. The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science. Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the metadata information of the self-describing model, reanalysis and observational data sets in a database. This implemented metadata system with its advanced but easy-to-handle search tool supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitation of the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. The integrated webshell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate, e.g., their post-processed results into the database of the user. This allows, e.g., post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via shell or web system. Therefore, plugged-in tools benefit from transparency and reproducibility. Furthermore, if configurations match while starting an evaluation plugin, the system suggests using results already produced by other users - saving CPU/h, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.

  8. A configurable and low-power mixed signal SoC for portable ECG monitoring applications.

    PubMed

    Kim, Hyejung; Kim, Sunyoung; Van Helleputte, Nick; Artes, Antonio; Konijnenburg, Mario; Huisken, Jos; Van Hoof, Chris; Yazicioglu, Refet Firat

    2014-04-01

    This paper describes a mixed-signal ECG System-on-Chip (SoC) that is capable of implementing configurable functionality with low power consumption for portable ECG monitoring applications. A low-voltage, high-performance analog front-end extracts 3-channel ECG signals and a single-channel electrode-tissue impedance (ETI) measurement with high signal quality. This can be used to evaluate the quality of the ECG measurement and to filter motion artifacts. A custom digital signal processor consisting of a 4-way SIMD processor provides the configurability and advanced functionality such as motion artifact removal and R-peak detection. A built-in 12-bit analog-to-digital converter (ADC) is capable of adaptive sampling achieving a compression ratio of up to 7, and loop buffer integration reduces the power consumption for on-chip memory access. The SoC is implemented in a 0.18 μm CMOS process and consumes 32 μW from a 1.2 V supply while the heartbeat detection application is running, and is integrated in a wireless ECG monitoring system using the Bluetooth protocol. Thanks to the ECG SoC, the overall system power consumption can be reduced significantly.
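
    The abstract does not spell out the chip's adaptive-sampling algorithm, so the following is only a hypothetical illustration of the general idea: keep a sample when it differs from the last kept sample by more than a threshold, and report the resulting compression ratio.

```python
# Illustrative only: one simple adaptive-sampling policy and its compression
# ratio. The SoC's actual adaptive ADC scheme is not described in the
# abstract; the threshold and the test signal here are made up.

import numpy as np

def adaptive_sample(signal, threshold):
    """Return indices of samples kept by a change-triggered policy."""
    kept = [0]          # always keep the first sample
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= threshold:
            kept.append(i)
            last = x
    return kept

t = np.linspace(0, 2, 500)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)
kept = adaptive_sample(ecg_like, threshold=0.05)
print(f"compression ratio: {len(ecg_like) / len(kept):.1f}")
```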

  9. New method for remote and repeatable monitoring of intraocular pressure variations.

    PubMed

    Margalit, Israel; Beiderman, Yevgeny; Skaat, Alon; Rosenfeld, Elkanah; Belkin, Michael; Tornow, Ralf-Peter; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2014-02-01

    We present initial steps toward a new measurement device enabling high-precision, noncontact remote and repeatable monitoring of intraocular pressure (IOP), based on an innovative measurement principle. Using only a camera and a laser source, the device measures IOP by tracking the secondary speckle pattern trajectories produced by the reflection of an illuminating laser beam from the iris or the sclera. The device was tested on rabbit eyes using two different methods to modify IOP: via an infusion bag and via mechanical pressure. In both cases, the eyes were stimulated with increasing and decreasing ramps of the IOP. As IOP variations changed the speckle distributions reflected back from the eye, data were recorded under various optical configurations to define and optimize the best experimental configuration for the IOP extraction. The association between the data provided by our proposed device and that resulting from controlled modification of the IOP was assessed, revealing high correlation (R2=0.98) and sensitivity and providing a high-precision measurement (5% estimated error) for the best experimental configuration. Future steps will be directed toward applying the proposed measurement principle in clinical trials for monitoring IOP with human subjects.

  10. High-strength wastewater treatment in a pure oxygen thermophilic process: 11-year operation and monitoring of different plant configurations.

    PubMed

    Collivignarelli, M C; Bertanza, G; Sordi, M; Pedrazzani, R

    2015-01-01

    This research was carried out on a full-scale pure oxygen thermophilic plant, operated and monitored throughout a period of 11 years. The plant treats 60,000 t y⁻¹ (year 2013) of high-strength industrial wastewaters deriving mainly from pharmaceutical and detergent production and from landfill leachate. Three different plant configurations were consecutively adopted: (1) biological reactor + final clarifier and sludge recirculation (2002-2005); (2) biological reactor + ultrafiltration: membrane biological reactor (MBR) (2006); and (3) MBR + nanofiltration (since 2007). Progressive plant upgrading yielded a performance improvement: chemical oxygen demand (COD) removal efficiency was enhanced by 17% and 12% after the first and second plant modification, respectively. Moreover, COD abatement efficiency exhibited a greater stability, notwithstanding high variability of the influent load. In addition, the following relevant outcomes appeared from the plant monitoring (present configuration): up to 96% removal of nitrate and nitrite, due to denitrification; low specific biomass production (0.092 kgVSS kgCODremoved⁻¹); and biological treatability of residual COD under mesophilic conditions (BOD5/COD ratio = 0.25-0.50), thus showing the complementarity of the two biological processes.

  11. Wireless online position monitoring of manual valve types for plant configuration management in nuclear power plants

    DOE PAGES

    Agarwal, Vivek; Buttles, John W.; Beaty, Lawrence H.; ...

    2016-10-05

    In the current competitive energy market, the nuclear industry is committed to lowering operations and maintenance costs and increasing productivity and efficiency while maintaining safe and reliable operation. The present operating model of nuclear power plants depends on large technical staffs, which puts the nuclear industry at a long-term economic disadvantage. Technology can play a key role in nuclear power plant configuration management by offsetting labor costs through automation of manually performed plant activities. The technology being developed, tested, and demonstrated in this paper will enable the continued safe operation of today’s fleet of light water reactors by providing the technical means to monitor components that today are only routinely monitored through manual activities. The wireless-enabled valve position indicators that are the subject of this paper provide a valid position indication continuously, rather than only periodically. As a result, real-time (online) availability of valve positions using affordable technologies is vital to plant configuration management, is economical when compared with long-term labor rates, and provides information that can be used for a variety of plant engineering, maintenance, and management applications.

  12. Wireless online position monitoring of manual valve types for plant configuration management in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Buttles, John W.; Beaty, Lawrence H.

    In the current competitive energy market, the nuclear industry is committed to lowering operations and maintenance costs and increasing productivity and efficiency while maintaining safe and reliable operation. The present operating model of nuclear power plants depends on large technical staffs, which puts the nuclear industry at a long-term economic disadvantage. Technology can play a key role in nuclear power plant configuration management by offsetting labor costs through automation of manually performed plant activities. The technology being developed, tested, and demonstrated in this paper will enable the continued safe operation of today’s fleet of light water reactors by providing the technical means to monitor components that today are only routinely monitored through manual activities. The wireless-enabled valve position indicators that are the subject of this paper provide a valid position indication continuously, rather than only periodically. As a result, real-time (online) availability of valve positions using affordable technologies is vital to plant configuration management, is economical when compared with long-term labor rates, and provides information that can be used for a variety of plant engineering, maintenance, and management applications.

  13. Configurable technology development for reusable control and monitor ground systems

    NASA Technical Reports Server (NTRS)

    Uhrlaub, David R.

    1994-01-01

    The control monitor unit (CMU) uses configurable software technology for real-time mission command and control, telemetry processing, simulation, data acquisition, data archiving, and ground operations automation. The base technology is currently planned for the following control and monitor systems: portable Space Station checkout systems; ecological life support systems; Space Station logistics carrier system; and the ground system of the Delta Clipper (SX-2) in the Single-Stage Rocket Technology program. The CMU makes extensive use of commercial technology to increase capability and reduce development and life-cycle costs. The concepts and technology are being developed by McDonnell Douglas Space and Defense Systems for the Real-Time Systems Laboratory at NASA's Kennedy Space Center under the Payload Ground Operations Contract. A second function of the Real-Time Systems Laboratory is development and utilization of advanced software development practices.

  14. GEOGLAM Crop Assessment Tool: Adapting from global agricultural monitoring to food security monitoring

    NASA Astrophysics Data System (ADS)

    Humber, M. L.; Becker-Reshef, I.; Nordling, J.; Barker, B.; McGaughey, K.

    2014-12-01

    The GEOGLAM Crop Monitor's Crop Assessment Tool was released in August 2013 in support of the GEOGLAM Crop Monitor's objective to develop transparent, timely crop condition assessments in primary agricultural production areas, highlighting potential hotspots of stress/bumper crops. The Crop Assessment Tool allows users to view satellite derived products, best available crop masks, and crop calendars (created in collaboration with GEOGLAM Crop Monitor partners), then in turn submit crop assessment entries detailing the crop's condition, drivers, impacts, trends, and other information. Although the Crop Assessment Tool was originally intended to collect data on major crop production at the global scale, the types of data collected are also relevant to the food security and rangelands monitoring communities. In line with the GEOGLAM Countries at Risk philosophy of "foster[ing] the coordination of product delivery and capacity building efforts for national and regional organizations, and the development of harmonized methods and tools", a modified version of the Crop Assessment Tool is being developed for the USAID Famine Early Warning Systems Network (FEWS NET). As a member of the Countries at Risk component of GEOGLAM, FEWS NET provides agricultural monitoring, timely food security assessments, and early warnings of potential significant food shortages focusing specifically on countries at risk of food security emergencies. While the FEWS NET adaptation of the Crop Assessment Tool focuses on crop production in the context of food security rather than large scale production, the data collected is nearly identical to the data collected by the Crop Monitor. If combined, the countries monitored by FEWS NET and GEOGLAM Crop Monitor would encompass over 90 countries representing the most important regions for crop production and food security.

  15. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications

    PubMed Central

    Costa, Daniel G.; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-01

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field. PMID:28067777

  16. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications.

    PubMed

    Costa, Daniel G; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-05

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field.
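
    The authors' fuzzy rule base is not given in the abstract; the sketch below is a hypothetical, minimal example of the general approach, combining two triangular membership functions (battery level and event relevance) to select a frame-rate configuration for a visual sensor. All membership shapes and frame rates are invented.

```python
# A minimal fuzzy-style sketch, not the authors' rule base: two triangular
# membership functions are combined to pick a frame-rate configuration for a
# visual sensor. All shapes, thresholds, and frame rates are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def frame_rate(battery_pct, relevance):
    low_battery = tri(battery_pct, -1, 0, 50)
    high_battery = tri(battery_pct, 40, 100, 201)
    low_rel = tri(relevance, -0.1, 0.0, 0.6)
    high_rel = tri(relevance, 0.4, 1.0, 1.1)

    # Rule strengths: OR via max, AND via min; then weighted-average defuzzification.
    save = max(low_battery, low_rel)       # rule favouring a low rate (2 fps)
    stream = min(high_battery, high_rel)   # rule favouring a high rate (25 fps)
    total = save + stream
    return 5.0 if total == 0 else (save * 2 + stream * 25) / total

print(frame_rate(battery_pct=80, relevance=0.9))   # near 25 fps
print(frame_rate(battery_pct=15, relevance=0.2))   # near 2 fps
```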

  17. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil and a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area-rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  18. Applied Meteorology Unit Quarterly Report. First Quarter FY-13

    NASA Technical Reports Server (NTRS)

    2013-01-01

    The AMU team worked on five tasks for their customers: (1) Ms. Crawford continued work on the objective lightning forecast task for airports in east-central Florida. (2) Ms. Shafer continued work on the task for Vandenberg Air Force Base to create an automated tool that will help forecasters relate pressure gradients to peak wind values. (3) Dr. Huddleston began work to develop a lightning timing forecast tool for the Kennedy Space Center/Cape Canaveral Air Force Station area. (4) Dr. Bauman began work on a severe weather forecast tool focused on east-central Florida. (5) Dr. Watson completed testing high-resolution model configurations for Wallops Flight Facility and the Eastern Range, and wrote the final report containing the AMU's recommendations for model configurations at both ranges.

  19. Airplane numerical simulation for the rapid prototyping process

    NASA Astrophysics Data System (ADS)

    Roysdon, Paul F.

    Airplane Numerical Simulation for the Rapid Prototyping Process is a comprehensive research investigation into the most up-to-date methods for airplane development and design. Uses of modern engineering software tools, like MATLAB and Excel, are presented with examples of batch and optimization algorithms which combine the computing power of MATLAB with robust aerodynamic tools like XFOIL and AVL. The resulting data is demonstrated in the development and use of a full non-linear six-degrees-of-freedom simulator. The applications for this numerical toolbox vary from unmanned aerial vehicles to first-order analysis of manned aircraft. A Blended-Wing-Body airplane is used for the analysis to demonstrate the flexibility of the code from classic wing-and-tail configurations to less common configurations like the blended-wing-body. This configuration has been shown to have superior aerodynamic performance -- in contrast to its classic wing-and-tube-fuselage counterparts -- and to have reduced sensitivity to aerodynamic flutter as well as potential for increased engine noise abatement. Of course, without a classic tail elevator to damp the nose-up pitching moment and a vertical tail rudder to damp yaw and possible rolling motion, the challenges in lateral roll and yaw stability, as well as pitching moment, are not insignificant. This thesis work applies the tools necessary to perform the airplane development and optimization on a rapid basis, demonstrating the strength of this tool through examples and comparison of the results to similar airplane performance characteristics published in the literature.

  20. Configural Frequency Analysis as a Statistical Tool for Developmental Research.

    ERIC Educational Resources Information Center

    Lienert, Gustav A.; Oeveste, Hans Zur

    1985-01-01

    Configural frequency analysis (CFA) is suggested as a technique for longitudinal research in developmental psychology. Stability and change in answers to multiple choice and yes-no item patterns obtained with repeated measurements are identified by CFA and illustrated by developmental analysis of an item from Gorham's Proverb Test. (Author/DWH)
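
    As a hedged illustration of the basic CFA idea (not Lienert's published procedure), the sketch below compares observed counts of answer configurations with the counts expected under independence and flags configurations with large standardized residuals as types or antitypes. The counts are invented.

```python
# Minimal sketch of the configural-frequency-analysis idea: observed counts of
# 2x2 answer configurations are compared with expectations under independence,
# and configurations with a large standardized residual are flagged as
# types/antitypes. The counts below are invented.

import numpy as np

# Rows: item answered "yes"/"no" at time 1; columns: same item at time 2.
observed = np.array([[40, 10],
                     [ 5, 45]], dtype=float)

n = observed.sum()
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n

for (i, j), obs in np.ndenumerate(observed):
    exp = expected[i, j]
    z = (obs - exp) / np.sqrt(exp * (1 - exp / n))  # approximate standardized residual
    label = "type" if z > 1.96 else "antitype" if z < -1.96 else "-"
    print(f"config ({i},{j}): observed={obs:.0f} expected={exp:.1f} z={z:+.2f} {label}")
```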

  1. Chunking Strategy as a Tool for Teaching Electron Configuration

    ERIC Educational Resources Information Center

    Adhikary, Chandan; Sana, Sibananda; Chattopadhyay, K. N.

    2015-01-01

    Chunk-based strategy and mnemonics have been developed to write ground state electron configurations of elements, which is a routine exercise for the higher secondary (pre-university) level general chemistry students. To assimilate a better understanding of the nature of chemical reactions, an adequate knowledge of the periodic table of elements…
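
    As an illustration of the routine exercise the abstract refers to (writing ground-state configurations), rather than of the chunking mnemonic itself, the following sketch fills subshells in Madelung (n + l) order; known exceptions such as Cr and Cu are ignored.

```python
# Illustration of the routine exercise the abstract refers to (not the
# chunking mnemonic itself): fill subshells in Madelung (n + l, then n) order
# to write a ground-state electron configuration. Exceptions such as Cr and
# Cu are ignored in this sketch.

def electron_configuration(z: int) -> str:
    letters = "spdf"
    # Subshells ordered by (n + l), then n.
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(0, min(n, 4))),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    parts, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        electrons = min(remaining, 2 * (2 * l + 1))
        parts.append(f"{n}{letters[l]}{electrons}")
        remaining -= electrons
    return " ".join(parts)

print(electron_configuration(26))  # Fe: 1s2 2s2 2p6 3s2 3p6 4s2 3d6
```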

  2. Monitoring of olive oil mills' wastes using electrical resistivity tomography techniques

    NASA Astrophysics Data System (ADS)

    Simyrdanis, Kleanthis; Papadopoulos, Nikos; Kirkou, Stella; Sarris, Apostolos; Tsourlos, Panagiotis

    2014-08-01

    Olive oil mills' wastes (OOMW) are one of the byproducts of oil production that can lead to serious environmental pollution when they are deposited in ponds dug on the ground surface. The electrical resistivity tomography (ERT) method can provide a valuable tool in order to monitor through time the physical flow of the wastes into the subsurface. ERT could potentially locate the electrical signature due to lower resistivity values resulting from the leakage of OOMW to the subsurface. For this purpose, two vertical boreholes were installed (12 m depth, 9 m apart) in the vicinity of an existing pond which is filled with OOMW during the oil production period. The test site is situated in Saint Andreas village about 15 km south of the city of Rethymno (Crete, Greece). Surface ERT measurements were collected along multiple lines in order to reconstruct the subsurface resistivity models. Data acquisition was performed with standard and optimized electrode configuration protocols. The monitoring survey includes the ERT data collection for a period of time. The study was initiated before the OOMW were deposited in the pond, so resistivity fluctuations are expected due to the flow of OOMW in the porous subsurface media through time. Preliminary results show the good correlation of the ERT images with the drilled geological formations and the identification of a low-resistivity subsurface zone that could be attributed to the flow of the wastes within the porous layers.

  3. Application of ZigBee sensor network to data acquisition and monitoring

    NASA Astrophysics Data System (ADS)

    Terada, Mitsugu

    2009-01-01

    A ZigBee sensor network for data acquisition and monitoring is presented in this paper. It is configured using a commercially available ZigBee solution. A ZigBee module is connected via a USB interface to a Microsoft Windows PC, which works as a base station in the sensor network. Data collected by remote devices are sent to the base station PC, which is set as a data sink. Each remote device is built from a commercially available ZigBee module product and a sensor. The sensor is a thermocouple connected to a cold-junction compensation amplifier. The signal from the amplifier is input to an A/D converter port on the ZigBee module. Temperature data are transmitted according to the ZigBee protocol from the remote device to the data sink PC. The data sampling rate is one sample per second; the highest possible rate is four samples per second. The data are recorded in the hexadecimal number format by device-control software, and the data file is stored in text format on the data sink PC. Time-dependent data changes can be monitored using the macro function of spreadsheet software. The system is considered a useful tool in the field of education, based on the results of trial use for measurement in an undergraduate laboratory class at a university.
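
    The abstract notes that readings are logged as hexadecimal text on the data-sink PC. The sketch below shows one hypothetical way such a log could be parsed; the assumption of one 12-bit ADC code per line and the 0-500 °C full-scale conversion are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch only: each log line is assumed to hold one 12-bit ADC
# code in hexadecimal, and the ADC-to-temperature scaling (0-4095 -> 0-500 degC)
# is invented for illustration, not taken from the paper.

def parse_log(lines):
    for line in lines:
        line = line.strip()
        if not line:
            continue
        code = int(line, 16)                 # e.g. "0x1A3" or "01FF"
        temperature_c = code / 4095 * 500.0  # assumed full-scale range
        yield code, temperature_c

sample_log = ["0x1A3", "0x1B0", "01FF"]
for code, temp in parse_log(sample_log):
    print(f"raw=0x{code:03X}  temperature={temp:5.1f} degC")
```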

  4. Image-based spectroscopy for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind

    2014-03-01

    An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip acquired from Nano-Optic Devices™ can reduce the size of the spectrometer down to that of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from a nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within the image. This transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.

  5. Infrared spectroscopic ellipsometry in semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Guittet, Pierre-Yves; Mantz, Ulrich; Weidner, Peter; Stehle, Jean-Louis; Bucchia, Marc; Bourtault, Sophie; Zahorski, Dorian

    2004-05-01

    Infrared spectroscopic ellipsometry (IRSE) metrology is an emerging technology in the semiconductor production environment. Infineon Technologies SC300 implemented the world's first automated IRSE in a class 1 clean room in 2002. Combining the properties of IR light (large wavelength, low absorption in silicon) with short-focus optics (no backside reflection), which allows model-based analysis, a large number of production applications were developed. Part of Infineon's IRSE development roadmap is now focused on depth monitoring for arrays of 3D dry-etched structures. In trench DRAM manufacturing, the areal density is high, and critical dimensions are much smaller than the mid-IR wavelength. Therefore, extensive use of effective medium theory is made to model 3D structures. IRSE metrology is not limited by shrinking critical dimensions, as long as the areal density is above a specific cut-off value determined by trench dimensions, trench filling, and surrounding materials. Two applications for depth monitoring are presented. 1D models were developed and successfully applied to the DRAM trench capacitor structures. Modeling and correlation to reference methods are shown as well as dynamic repeatability and gauge capability results. Limitations of the current tool configuration are reviewed for shallow structures.

  6. Mapping HIV community viral load: space, power and the government of bodies

    PubMed Central

    Gagnon, Marilou; Guta, Adrian

    2012-01-01

    HIV plasma viral load testing has become more than just a clinical tool to monitor treatment response at the individual level. Increasingly, individual HIV plasma viral load testing is being reported to public health agencies and is used to inform epidemiological surveillance and monitor the presence of the virus collectively using techniques to measure ‘community viral load’. This article seeks to formulate a critique and propose a novel way of theorizing community viral load. Based on the salient work of Michel Foucault, especially the governmentality literature, this article critically examines the use of community viral load as a new strategy of government. Drawing also on the work of Miller and Rose, this article explores the deployment of ‘community’ through the re-configuration of space, the problematization of viral concentrations in specific microlocales, and the government (in the Foucauldian sense) of specific bodies which are seen as ‘risky’, dangerous and therefore, in need of attention. It also examines community viral load as a necessary precondition — forming the ‘conditions of possibility’ — for the recent shift to high impact prevention tactics that are being scaled up across North America. PMID:23060688

  7. Tips and Traps: Lessons From Codesigning a Clinician E-Monitoring Tool for Computerized Cognitive Behavioral Therapy

    PubMed Central

    Hawken, Susan J; Stasiak, Karolina; Lucassen, Mathijs FG; Fleming, Theresa; Shepherd, Matthew; Greenwood, Andrea; Osborne, Raechel; Merry, Sally N

    2017-01-01

    Background Computerized cognitive behavioral therapy (cCBT) is an acceptable and promising treatment modality for adolescents with mild-to-moderate depression. Many cCBT programs are standalone packages with no way for clinicians to monitor progress or outcomes. We sought to develop an electronic monitoring (e-monitoring) tool in consultation with clinicians and adolescents to allow clinicians to monitor mood, risk, and treatment adherence of adolescents completing a cCBT program called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts). Objective The objectives of our study were as follows: (1) assess clinicians’ and adolescents’ views on using an e-monitoring tool and to use this information to help shape the development of the tool and (2) assess clinician experiences with a fully developed version of the tool that was implemented in their clinical service. Methods A descriptive qualitative study using semistructured focus groups was conducted in New Zealand. In total, 7 focus groups included clinicians (n=50) who worked in primary care, and 3 separate groups included adolescents (n=29). Clinicians were general practitioners (GPs), school guidance counselors, clinical psychologists, youth workers, and nurses. Adolescents were recruited from health services and a high school. Focus groups were run to enable feedback at 3 phases that corresponded to the consultation, development, and postimplementation stages. Thematic analysis was applied to transcribed responses. Results Focus groups during the consultation and development phases revealed the need for a simple e-monitoring registration process with guides for end users. Common concerns were raised in relation to clinical burden, monitoring risk (and effects on the therapeutic relationship), alongside confidentiality or privacy and technical considerations. Adolescents did not want to use their social media login credentials for e-monitoring, as they valued their privacy. However, adolescents did want information on seeking help and personalized monitoring and communication arrangements. Postimplementation, clinicians who had used the tool in practice revealed no adverse impact on the therapeutic relationship, and adolescents were not concerned about being e-monitored. Clinicians did need additional time to monitor adolescents, and the e-monitoring tool was used in a different way than was originally anticipated. Also, it was suggested that the registration process could be further streamlined and integrated with existing clinical data management systems, and the use of clinician alerts could be expanded beyond the scope of simply flagging adolescents of concern. Conclusions An e-monitoring tool was developed in consultation with clinicians and adolescents. However, the study revealed the complexity of implementing the tool in clinical practice. Of salience were privacy, parallel monitoring systems, integration with existing electronic medical record systems, customization of the e-monitor, and preagreed monitoring arrangements between clinicians and adolescents. PMID:28077345

  8. Development and pilot testing of an online monitoring tool of depression symptoms and side effects for young people being treated for depression.

    PubMed

    Hetrick, Sarah E; Dellosa, Maria Kristina; Simmons, Magenta B; Phillips, Lisa

    2015-02-01

    To develop and examine the feasibility of an online monitoring tool of depressive symptoms, suicidality and side effects. The online tool was developed based on guideline recommendations, and employed already validated and widely used measures. Quantitative data about its use, and qualitative information on its functionality and usefulness were collected from surveys, a focus group and individual interviews. Fifteen young people completed the tool between 1 and 12 times, and reported it was easy to use. Clinicians suggested it was too long and could be completed in the waiting room to lessen impact on session time. Overall, clients and clinicians who used the tool found it useful. Results show that an online monitoring tool is potentially useful as a systematic means for monitoring symptoms, but further research is needed including how to embed the tool within clinical practice. © 2014 Wiley Publishing Asia Pty Ltd.

  9. Integrated Design and Production Reference Integration with ArchGenXML V1.00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barter, R H

    2004-07-20

    ArchGenXML is a tool that allows easy creation of Zope products through the use of Archetypes. The Integrated Design and Production Reference (IDPR) should be highly configurable in order to meet the needs of a diverse engineering community. Ease of configuration is key to the success of IDPR. The purpose of this paper is to describe a method of using a UML diagram editor to configure IDPR through ArchGenXML and Archetypes.

  10. Parametric Study of Biconic Re-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Steele, Bryan; Banks, Daniel W.; Whitmore, Stephen A.

    2007-01-01

    An optimization based on hypersonic aerodynamic performance and volumetric efficiency was accomplished for a range of biconic configurations. Both axisymmetric and quasi-axisymmetric geometries (bent and flattened) were analyzed. The aerodynamic optimization was based on hypersonic simple incidence-angle analysis tools. The range of configurations included those suitable for a lunar return trajectory with a lifting aerocapture at Earth and an overall volume that could support a nominal crew. The results yielded five configurations that had acceptable aerodynamic performance and met overall geometry and size limitations.

  11. Overview of SDCM - The Spacecraft Design and Cost Model

    NASA Technical Reports Server (NTRS)

    Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.

    1988-01-01

    The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters for subsystem components are first calculated; the model then sums the contributions from individual components in order to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.
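
    A toy illustration of the bottom-up roll-up described above (not SDCM's actual models or data): component mass and cost estimates are summed into subsystem and spacecraft totals so that candidate configurations can be compared. All numbers are made up.

```python
# Not SDCM itself: a toy bottom-up roll-up summing invented component mass and
# cost estimates into subsystem and spacecraft totals.

components = {
    "power":    [("solar array", 85.0, 4.2), ("battery", 40.0, 1.1)],
    "thermal":  [("radiator", 30.0, 0.9)],
    "avionics": [("flight computer", 12.0, 3.5), ("transponder", 8.0, 2.0)],
}  # subsystem -> list of (name, mass_kg, cost_M$); all values are made up

spacecraft_mass = spacecraft_cost = 0.0
for subsystem, parts in components.items():
    mass = sum(m for _, m, _ in parts)
    cost = sum(c for _, _, c in parts)
    spacecraft_mass += mass
    spacecraft_cost += cost
    print(f"{subsystem:8s}: {mass:6.1f} kg  {cost:4.1f} M$")
print(f"total   : {spacecraft_mass:6.1f} kg  {spacecraft_cost:4.1f} M$")
```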

  12. Mission support plan STS-2

    NASA Technical Reports Server (NTRS)

    Ibanez, F.

    1981-01-01

    The plan defines the anticipated GSTDN/DOD station support and configuration requirements for a nominal flight with an orbital inclination of 38.4 degrees and a circular orbit of 120 nautical miles for the first 5 orbits and 137 nautical miles thereafter. A complete set of preliminary site configuration messages (SCM) define nominal station AOS/LOS times and configurations for S-Band and UHF support. This document is intended for use as a planning tool, providing the necessary guidelines and data base for SCM generation in support of STS-2.

  13. minimega

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Fritz, John Floren

    2013-08-27

    Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.

  14. Validation of structural analysis methods using the in-house liner cyclic rigs

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.

    1982-01-01

    Test conditions and variables to be considered in each of the test rigs and test configurations, and also used in the validation of the structural predictive theories and tools, include: thermal and mechanical load histories (simulating an engine mission cycle); different boundary conditions; specimens and components of different dimensions and geometries; different materials; various cooling schemes and cooling-hole configurations; several advanced burner liner structural design concepts; and the simulation of hot streaks. Based on these test conditions and test variables, the test matrices for each rig and configuration can be established to verify the predictive tools over as wide a range of test conditions as possible using the simplest possible tests. A flow chart for the thermal/structural analysis of a burner liner and how the analysis relates to the tests is shown schematically. The chart shows that several nonlinear constitutive theories are to be evaluated.

  15. Cam-controlled boring bar

    DOEpatents

    Glatthorn, Raymond H.

    1986-01-01

    A cam-controlled boring bar system (100) includes a first housing (152) which is rotatable about its longitudinal axis (154), and a second housing in the form of a cam-controlled slide (158) which is also rotatable about the axis (154) as well as being translatable therealong. A tool-holder (180) is mounted within the slide (158) for holding a single point cutting tool. Slide (158) has a rectangular configuration and is disposed within a rectangularly configured portion of the first housing (152). Arcuate cam slots (192) are defined within a side plate (172) of the housing (152), while cam followers (194) are mounted upon the cam slide (158) for cooperative engagement with the cam slots (192). In this manner, as the housing (152) and slide (158) rotate, and as the slide (158) also translates, a through-bore (14) having an hourglass configuration will be formed within a workpiece (16) which may be, for example, a nuclear reactor steam generator tube support plate.

  16. Advanced composite elevator for Boeing 727 aircraft, volume 2

    NASA Technical Reports Server (NTRS)

    Chovil, D. V.; Grant, W. D.; Jamison, E. S.; Syder, H.; Desper, O. E.; Harvey, S. T.; Mccarty, J. E.

    1980-01-01

    Preliminary design activity consisted of developing and analyzing alternate design concepts and selecting the optimum elevator configuration. This included trade studies in which durability, inspectability, producibility, repairability, and customer acceptance were evaluated. Preliminary development efforts consisted of evaluating and selecting material, identifying ancillary structural development test requirements, and defining full scale ground and flight test requirements necessary to obtain Federal Aviation Administration (FAA) certification. After selection of the optimum elevator configuration, detail design was begun and included basic configuration design improvements resulting from manufacturing verification hardware, the ancillary test program, weight analysis, and structural analysis. Detail and assembly tools were designed and fabricated to support a full-scope production program, rather than a limited run. The producibility development programs were used to verify tooling approaches, fabrication processes, and inspection methods for the production mode. Quality parts were readily fabricated and assembled with a minimum rejection rate, using prior inspection methods.

  17. Aggregating concept map data to investigate the knowledge of beginning CS students

    NASA Astrophysics Data System (ADS)

    Mühling, Andreas

    2016-07-01

    Concept maps have a long history in educational settings as a tool for teaching, learning, and assessing. As an assessment tool, they are predominantly used to extract the structural configuration of learners' knowledge. This article presents an investigation of the knowledge structures of a large group of beginning CS students. The investigation is based on a method that collects, aggregates, and automatically analyzes the concept maps of a group of learners as a whole, to identify common structural configurations and differences in the learners' knowledge. It shows that those students who have attended CS education in their secondary school life have, on average, configured their knowledge about typical core CS/OOP concepts differently. Also, artifacts of their particular CS curriculum are visible in their externalized knowledge. The data structures and analysis methods necessary for working with concept landscapes have been implemented as a GNU R package that is freely available.

  18. Comparison of fatigue crack growth of riveted and bonded aircraft lap joints made of Aluminium alloy 2024-T3 substrates - A numerical study

    NASA Astrophysics Data System (ADS)

    Pitta, S.; Rojas, J. I.; Crespo, D.

    2017-05-01

    Aircraft lap joints play an important role in minimizing the operational cost of airlines. Hence, airlines pay more attention to these technologies to improve efficiency. Namely, a major time-consuming and costly process is maintenance of aircraft between flights, for instance, detecting early formation of cracks, monitoring crack growth, and fixing the corresponding parts with joints, if necessary. This work is focused on the study of repairs of cracked aluminium alloy (AA) 2024-T3 plates to regain their original strength; particularly, cracked AA 2024-T3 substrate plates repaired with doublers of AA 2024-T3 in two configurations (riveted and with adhesive bonding) are analysed. The fatigue life of the substrate plates with cracks of 1, 2, 5, 10 and 12.7 mm is computed using the Fracture Analysis 3D (FRANC3D) tool. The stress intensity factors for the repaired AA 2024-T3 plates are computed for different crack lengths and compared using the commercial FEA tool ABAQUS. The results for the bonded repairs showed significantly lower stress intensity factors compared with the riveted repairs. This improves the overall fatigue life of the bonded joint.
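
    The paper's results come from FRANC3D and ABAQUS models; as a generic, hedged illustration of the textbook relations often used in such comparisons, the sketch below evaluates a stress-intensity-factor range K = Y·σ·√(πa) and integrates a Paris-law crack-growth rate to estimate cycles of growth. The geometry factor, stress range, Paris constants, and final crack length are all assumed.

```python
# Generic textbook relations, not the paper's FRANC3D/ABAQUS models: stress
# intensity factor range delta_K = Y * delta_sigma * sqrt(pi * a) and Paris-law
# integration of crack growth. All constants below are invented.

import numpy as np

Y = 1.12             # assumed geometry factor
delta_sigma = 100.0  # stress range in MPa (assumed)
C, m = 1.0e-11, 3.0  # Paris constants: da/dN in m/cycle, delta_K in MPa*sqrt(m) (assumed)

def delta_K(a_m):
    """Stress-intensity-factor range (MPa*sqrt(m)) for crack length a (m)."""
    return Y * delta_sigma * np.sqrt(np.pi * a_m)

def cycles_to_grow(a0_m, af_m, steps=20000):
    """Trapezoidal integration of dN = da / (C * delta_K(a)**m)."""
    a = np.linspace(a0_m, af_m, steps)
    inv_rate = 1.0 / (C * delta_K(a) ** m)
    return float(np.sum((inv_rate[:-1] + inv_rate[1:]) * 0.5 * np.diff(a)))

for a0_mm in (1.0, 2.0, 5.0, 10.0, 12.7):    # initial crack lengths from the abstract
    n = cycles_to_grow(a0_mm * 1e-3, 25e-3)  # assumed final crack length of 25 mm
    print(f"a0 = {a0_mm:4.1f} mm -> N ~ {n:,.0f} cycles")
```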

  19. HTTP-based remote operational options for the Vacuum Tower Telescope, Tenerife

    NASA Astrophysics Data System (ADS)

    Staiger, J.

    2012-09-01

    We are currently developing network-based tools for the Vacuum Tower Telescope (VTT), Tenerife, which will allow the telescope to be operated together with the newly developed 2D spectrometer HELLRIDE under remote-control conditions. The computational configuration can be viewed as a distributed system linking hardware components of various functionality from different locations. We have developed a communication protocol which is basically an extension of the HTTP standard. It will serve as a carrier for command and data transfers. The server-client software is based on Berkeley Unix sockets in a C++ programming environment. A customized CMS will allow browser-accessible information to be created on the fly. Java-based applet pages have been tested as optional user-access GUIs. An access tool has been implemented to download near-real-time, web-based target information from NASA/SDO. Latency tests have been carried out at the VTT and the Swedish STT at La Palma for concept verification. Short response times indicate that under favorable network conditions remote interactive telescope handling may be possible. The scientific focus of possible future remote operations will be set on the helioseismology of the solar atmosphere, the monitoring of flares and the footpoint analysis of coronal loops and chromospheric events.

  20. Industrial applications of THz systems

    NASA Astrophysics Data System (ADS)

    Wietzke, S.; Jansen, C.; Jördens, C.; Krumbholz, N.; Vieweg, N.; Scheller, M.; Shakfa, M. K.; Romeike, D.; Hochrein, T.; Mikulics, M.; Koch, M.

    2009-07-01

    Terahertz time-domain spectroscopy (THz TDS) holds high potential as a non-destructive, non-contact testing tool. We have identified a plethora of emerging industrial applications such as quality control of industrial processes and products in the plastics industry. Polymers are transparent to THz waves while additives show a significantly higher permittivity. This dielectric contrast allows for detecting the additive concentration and the degree of dispersion. We present a first inline configuration of a THz TDS spectrometer for monitoring polymeric compounding processes. To evaluate plastic components, non-destructive testing is strongly recommended. For instance, THz imaging is capable of inspecting plastic weld joints or revealing the orientation of fiber reinforcements. Water strongly absorbs THz radiation. However, this sensitivity to water can be employed in order to investigate the moisture absorption in plastics and the water content in plants. Furthermore, applications in food technology are discussed. Moreover, security scanning applications are addressed in terms of identifying liquid explosives. We present the vision and first components of a handheld security scanner. In addition, a new approach for parameter extraction of THz TDS data is presented. All in all, we give an overview how industry can benefit from THz TDS completing the tool box of non-destructive evaluation.

  1. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    NASA Astrophysics Data System (ADS)

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-12-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, by licensing for engineering tools, and by the scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.

  2. Pure-rotational spectrometry: a vintage analytical method applied to modern breath analysis.

    PubMed

    Hrubesh, Lawrence W; Droege, Michael W

    2013-09-01

    Pure-rotational spectrometry (PRS) is an established method, typically used to study structures and properties of polar gas-phase molecules, including isotopic and isomeric varieties. PRS has also been used as an analytical tool where it is particularly well suited for detecting or monitoring low-molecular-weight species that are found in exhaled breath. PRS is principally notable for its ultra-high spectral resolution which leads to exceptional specificity to identify molecular compounds in complex mixtures. Recent developments using carbon aerogel for pre-concentrating polar molecules from air samples have extended the sensitivity of PRS into the part-per-billion range. In this paper we describe the principles of PRS and show how it may be configured in several different modes for breath analysis. We discuss the pre-concentration concept and demonstrate its use with the PRS analyzer for alcohols and ammonia sampled directly from the breath.

  3. NetMOD Version 2.0 Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.

    2015-08-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic, hydroacoustic and infrasonic networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This document describes the parameters that are used to configure the NetMOD tool and the input and output parameters that make up the simulation definitions.
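
    NetMOD's internal formulation is not given here; the following is only a generic illustration of turning a simulated SNR into a detection probability under a Gaussian assumption on log10(SNR), with an arbitrary threshold. The numbers are invented.

```python
# Generic illustration, not NetMOD's internal model: given a simulated mean
# log10(SNR), its standard deviation, and a detection threshold, the
# probability of detection is the Gaussian tail probability above the
# threshold. All numbers are invented.

import math

def detection_probability(mean_log_snr, sigma_log_snr, threshold_log_snr):
    """P(log10 SNR > threshold) under a Gaussian assumption."""
    z = (threshold_log_snr - mean_log_snr) / sigma_log_snr
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Example: a station whose predicted log10(SNR) is 0.6 +/- 0.3, threshold 0.4.
print(f"P(detect) = {detection_probability(0.6, 0.3, 0.4):.2f}")
```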

  4. Reconstructing Folding Energy Landscapes by Single-Molecule Force Spectroscopy

    PubMed Central

    Woodside, Michael T.; Block, Steven M.

    2015-01-01

    Folding may be described conceptually in terms of trajectories over a landscape of free energies corresponding to different molecular configurations. In practice, energy landscapes can be difficult to measure. Single-molecule force spectroscopy (SMFS), whereby structural changes are monitored in molecules subjected to controlled forces, has emerged as a powerful tool for probing energy landscapes. We summarize methods for reconstructing landscapes from force spectroscopy measurements under both equilibrium and nonequilibrium conditions. Other complementary, but technically less demanding, methods provide a model-dependent characterization of key features of the landscape. Once reconstructed, energy landscapes can be used to study critical folding parameters, such as the characteristic transition times required for structural changes and the effective diffusion coefficient setting the timescale for motions over the landscape. We also discuss issues that complicate measurement and interpretation, including the possibility of multiple states or pathways and the effects of projecting multiple dimensions onto a single coordinate. PMID:24895850

  5. Demonstrating artificial intelligence for space systems - Integration and project management issues

    NASA Technical Reports Server (NTRS)

    Hack, Edmund C.; Difilippo, Denise M.

    1990-01-01

    As part of its Systems Autonomy Demonstration Project (SADP), NASA has recently demonstrated the Thermal Expert System (TEXSYS). Advanced real-time expert system and human interface technology was successfully developed and integrated with conventional controllers of prototype space hardware to provide intelligent fault detection, isolation, and recovery capability. Many specialized skills were required, and responsibility for the various phases of the project therefore spanned multiple NASA centers, internal departments and contractor organizations. The test environment required communication among many types of hardware and software as well as between many people. The integration, testing, and configuration management tools and methodologies which were applied to the TEXSYS project to assure its safe and successful completion are detailed. The project demonstrated that artificial intelligence technology, including model-based reasoning, is capable of the monitoring and control of a large, complex system in real time.

  6. A streamlined software environment for situated skills

    NASA Technical Reports Server (NTRS)

    Yu, Sophia T.; Slack, Marc G.; Miller, David P.

    1994-01-01

    This paper documents a powerful set of software tools used for developing situated skills. These situated skills form the reactive level of a three-tiered intelligent agent architecture. The architecture is designed to allow these skills to be manipulated by a task level engine which is monitoring the current situation and selecting skills necessary for the current task. The idea is to coordinate the dynamic activations and deactivations of these situated skills in order to configure the reactive layer for the task at hand. The heart of the skills environment is a data flow mechanism which pipelines the currently active skills for execution. A front end graphical interface serves as a debugging facility during skill development and testing. We are able to integrate skills developed in different languages into the skills environment. The power of the skills environment lies in the amount of time it saves for the programmer to develop code for the reactive layer of a robot.

  7. Adapting HIV patient and program monitoring tools for chronic non-communicable diseases in Ethiopia.

    PubMed

    Letebo, Mekitew; Shiferaw, Fassil

    2016-06-02

    Chronic non-communicable diseases (NCDs) have become a huge public health concern in developing countries. Many resource-poor countries facing this growing epidemic, however, lack systems for an organized and comprehensive response to NCDs. The lack of a national NCD policy, strategies, treatment guidelines and surveillance and monitoring systems is a feature of health systems in many developing countries. Successfully responding to the problem requires a number of actions by the countries, including developing context-appropriate chronic care models and programs and standardizing patient and program monitoring tools. In this cross-sectional qualitative study we assessed existing monitoring and evaluation (M&E) tools used for NCD services in Ethiopia. Since the HIV care and treatment program is the only large-scale chronic care program in the country, we explored the M&E tools being used in the program and analyzed how these tools might be adapted to support NCD services in the country. Document review and in-depth interviews were the main data collection methods used. The interviews were held with health workers and staff involved in data management, purposively selected from four health facilities with high HIV and NCD patient loads. Thematic analysis was employed to make sense of the data. Our findings indicate an apparent lack of information systems for NCD services, including the absence of standardized patient and program monitoring tools to support the services. We identified several HIV care and treatment patient and program monitoring tools currently being used to facilitate the intake process, enrolment, follow up, cohort monitoring, appointment keeping, analysis and reporting. An analysis of how each tool used for HIV patient and program monitoring can be adapted to support NCD services is presented. Given the similarity between HIV care and treatment and NCD services and the huge investment already made to implement standardized tools for the HIV care and treatment program, adaptation and use of HIV patient and program monitoring tools for NCD services can improve the NCD response in Ethiopia by structuring services, standardizing patient care and treatment, supporting evidence-based planning and providing information on the effectiveness of interventions.

  8. Low-cost measurement and monitoring system for cryogenic applications

    NASA Astrophysics Data System (ADS)

    Tubío Araújo, Óscar; Hernández Suárez, Elvio; Gracia Temich, Félix

    2016-07-01

    Cryostats are closed chambers that hinder the monitoring of materials, structures or systems installed therein. This paper presents a webcam-based measurement and monitoring system, which can operate under vacuum and cryogenic conditions to be mainly used in astrophysical applications. The system can be configured in two different assemblies: wide field that can be used for mechanism monitoring and narrow field, especially useful in cryogenic precision measurements with a resolution up to 4 microns/pixel.

  9. A Case Study in Software Adaptation

    DTIC Science & Technology

    2002-01-01

    1 A Case Study in Software Adaptation Giuseppe Valetto Telecom Italia Lab Via Reiss Romoli 274 10148, Turin, Italy +39 011 2288788...configuration of the service; monitoring of database connectivity from within the service; monitoring of crashes and shutdowns of IM servers; monitoring of...of the IM server all share a relational database and a common runtime state repository, which make up the backend tier, and allow replicas to

  10. Effects of spatial configuration of imperviousness and green infrastructure networks on hydrologic response in a residential sewershed

    NASA Astrophysics Data System (ADS)

    Lim, Theodore C.; Welty, Claire

    2017-09-01

    Green infrastructure (GI) is an approach to stormwater management that promotes natural processes of infiltration and evapotranspiration, reducing surface runoff to conventional stormwater drainage infrastructure. As more urban areas incorporate GI into their stormwater management plans, greater understanding is needed on the effects of spatial configuration of GI networks on hydrological performance, especially in the context of potential subsurface and lateral interactions between distributed facilities. In this research, we apply a three-dimensional, coupled surface-subsurface, land-atmosphere model, ParFlow.CLM, to a residential urban sewershed in Washington DC that was retrofitted with a network of GI installations between 2009 and 2015. The model was used to test nine additional GI and imperviousness spatial network configurations for the site and was compared with monitored pipe-flow data. Results from the simulations show that GI located in higher flow-accumulation areas of the site intercepted more surface runoff, even during wetter and multiday events. However, a comparison of the differences between scenarios and levels of variation and noise in monitored data suggests that the differences would only be detectable between the most and least optimal GI/imperviousness configurations.

  11. A vision-based tool for the control of hydraulic structures in sewer systems

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Sage, D.; Kayal, S.; Jeanbourquin, D.; Rossi, L.

    2009-04-01

    During rain events, the total amount of the wastewater/storm-water mixture cannot be treated in the wastewater treatment plant; the overflowed water goes directly into the environment (lakes, rivers, streams) via devices called combined sewer overflows (CSOs). This water is untreated and is recognized as an important source of pollution. In most cases, the quantity of overflowed water is unknown due to high hydraulic turbulence during rain events; this quantity is often significant. For this reason, the monitoring of the water flow and the water level is of crucial environmental importance. Robust monitoring of sewer systems is a challenging task to achieve. Indeed, the environment inside sewer systems is inherently harsh and hostile: constant humidity of 100%, fast and large water level changes, corrosive atmosphere, presence of gas, difficult access, solid debris inside the flow. Flow monitoring based on traditional probes placed inside the water (such as a Doppler flow meter) is difficult to conduct because of the solid material transported by the flow. Probes placed outside the flow, such as ultrasonic water level probes, are often used; however, the measurement is generally done on only one particular point. Experience has shown that the water level in CSOs during rain events is far from constant due to hydraulic turbulence. Thus, such probes output uncertain information. Moreover, a check of the data reliability is impossible to achieve. The HydroPix system proposes a novel approach to the monitoring of sewers based on video images, without contact with the water flow. The goal of this system is to provide a monitoring tool for wastewater system managers (end-users). The hardware was chosen in order to suit the harsh conditions of sewer systems: cameras are 100% waterproof and corrosion-resistant; infra-red LED illumination systems are used (waterproof, low power consumption); a waterproof case contains the registration and communication system. The monitoring software has the following requirements: visual analysis of particular hydraulic behavior, automatic vision-based flow measurements, an automatic alarm system for particular events (overflows, risk of flooding, etc.), a database for data management (images, events, measurements, etc.), and the ability to be controlled remotely. The software is implemented in a modular server/client architecture under the LabVIEW development system. We have conducted conclusive in situ tests in various sewer configurations (CSOs, storm-water sewerage, WWTP); they have shown the ability of the HydroPix to perform accurate monitoring of hydraulic structures. Visual information provided a better understanding of the flow behavior in a complex and difficult environment.

  12. Analysis of the Capacity Potential of Current Day and Novel Configurations for New York's John F. Kennedy Airport

    NASA Technical Reports Server (NTRS)

    Glaab, Patricia; Tamburro, Ralph; Lee, Paul

    2016-01-01

    In 2015, a series of systems analysis studies were conducted on John F. Kennedy Airport in New York (NY) in a collaborative effort between NASA and the Port Authority of New York and New Jersey (PANYNJ). This work was performed to build a deeper understanding of NY airspace and operations to determine the improvements possible through operational changes with tools currently available, and where new technology is required for additional improvement. The analysis was conducted using tool-based mathematical analyses, video inspection and evaluation using recorded arrival/departure/surface traffic captured by the Aerobahn tool (used by Kennedy Airport for surface metering), and aural data archives available publically through the web to inform the video segments. A discussion of impacts of trajectory and operational choices on capacity is presented, including runway configuration and usage (parallel, converging, crossing, shared, independent, staggered), arrival and departure route characteristics (fix sharing, merges, splits), and how compression of traffic is staged. The authorization in March of 2015 for New York to use reduced spacing under the Federal Aviation Administration (FAA) Wake Turbulence Recategorization (RECAT) also offers significant capacity benefit for New York airports when fully transitioned to the new spacing requirements, and the impact of these changes for New York is discussed. Arrival and departure capacity results are presented for each of the current day Kennedy Airport configurations. While the tools allow many variations of user-selected conditions, the analysis for these studies used arrival-priority, no-winds, additional safety buffer of 5% to the required minimum spacing, and a mix of traffic typical for Kennedy. Two additional "novel" configurations were evaluated. These configurations are of interest to Port Authority and to their airline customers, and are believed to offer near-term capacity benefit with minimal operational and equipage changes. One of these is the addition of an Optimized Profile Descent (OPD) route to runways 22L and 22R, and the other is the simultaneous use of 4 runways, which is not currently done at Kennedy. The background and configuration for each of these is described, and the capacity results are presented along with a discussion of drawbacks and enablers for each.

  13. External tank project new technology plan. [development of space shuttle external tank system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A production plan for the space shuttle external tank configuration is presented. The subjects discussed are: (1) the thermal protection system, (2) thermal coating application techniques, (3) manufacturing and tooling, (4) propulsion system configurations and components, (5) low temperature rotating and sliding joint seals, (6) lightning protection, and (7) nondestructive testing technology.

  14. Content and functional specifications for a standards-based multidisciplinary rounding tool to maintain continuity across acute and critical care

    PubMed Central

    Collins, Sarah; Hurley, Ann C; Chang, Frank Y; Illa, Anisha R; Benoit, Angela; Laperle, Sarah; Dykes, Patricia C

    2014-01-01

    Background Maintaining continuity of care (CoC) in the inpatient setting is dependent on aligning goals and tasks with the plan of care (POC) during multidisciplinary rounds (MDRs). A number of locally developed rounding tools exist, yet there is a lack of standard content and functional specifications for electronic tools to support MDRs within and across settings. Objective To identify content and functional requirements for an MDR tool to support CoC. Materials and methods We collected discrete clinical data elements (CDEs) discussed during rounds for 128 acute and critical care patients. To capture CDEs, we developed and validated an iPad-based observational tool based on informatics CoC standards. We observed 19 days of rounds and conducted eight group and individual interviews. Descriptive and bivariate statistics and network visualization were conducted to understand associations between CDEs discussed during rounds with a particular focus on the POC. Qualitative data were thematically analyzed. All analyses were triangulated. Results We identified the need for universal and configurable MDR tool views across settings and users and the provision of messaging capability. Eleven empirically derived universal CDEs were identified, including four POC CDEs: problems, plan, goals, and short-term concerns. Configurable POC CDEs were: rationale, tasks/‘to dos’, pending results and procedures, discharge planning, patient preferences, need for urgent review, prognosis, and advice/guidance. Discussion Some requirements differed between settings; yet, there was overlap between POC CDEs. Conclusions We recommend an initial list of 11 universal CDEs for continuity in MDRs across settings and 27 CDEs that can be configured to meet setting-specific needs. PMID:24081019

  15. 40 CFR 63.773 - Inspection and monitoring requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... secured in the non-diverting position using a car-seal or a lock-and-key type configuration, visually... value is greater. The temperature sensor shall be installed at a location in the combustion chamber downstream of the combustion zone. (B) For a catalytic vapor incinerator, a temperature monitoring device...

  16. DNS and Embedded DNS as Tools for Investigating Unsteady Heat Transfer Phenomena in Turbines

    NASA Technical Reports Server (NTRS)

    vonTerzi, Dominic; Bauer, H.-J.

    2010-01-01

    DNS is a powerful tool with high potential for investigating unsteady heat transfer and fluid flow phenomena, in particular for cases involving transition to turbulence and/or large coherent structures. - DNS of idealized configurations related to turbomachinery components is already possible. - For more realistic configurations and the inclusion of more effects, reduction of computational cost is key issue (e.g., hybrid methods). - Approach pursued here: Embedded DNS ( segregated coupling of DNS with LES and/or RANS). - Embedded DNS is an enabling technology for many studies. - Pre-transitional heat transfer and trailing-edge cutback film-cooling are good candidates for (embedded) DNS studies.

  17. High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.

    2017-01-01

    To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high aspect ratio flexible wings. We develop a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We also developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high aspect ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high aspect ratio wings. This research resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.

  18. A multimodal optical and electrochemical device for monitoring surface reactions: redox active surfaces in porous silicon Rugate filters.

    PubMed

    Ciampi, Simone; Guan, Bin; Darwish, Nadim A; Zhu, Ying; Reece, Peter J; Gooding, J Justin

    2012-12-21

    Herein, mesoporous silicon (PSi) is configured as a single sensing device that has dual readouts; as a photonic crystal sensor in a Rugate filter configuration, and as a high surface area porous electrode. The as-prepared PSi is chemically modified to provide it with stability in aqueous media and to allow for the subsequent coupling of chemical species, such as via Cu(I)-catalyzed cycloaddition reactions between 1-alkynes and azides ("click" reactions). The utility of the bimodal capabilities of the PSi sensor for monitoring surface coupling procedures is demonstrated by the covalent coupling of a ferrocene derivative, as well as by demonstrating ligand-exchange reactions (LER) at the PSi surface. Both types of reactions were monitored through optical reflectivity measurements, as well as electrochemically via the oxidation/reduction of the surface tethered redox species.

  19. Feasibility study of microwave modulation DIAL system for global CO2 monitoring

    NASA Astrophysics Data System (ADS)

    Hirano, Yoshihito; Kameyama, Shumpei; Ueno, Shinichi; Sugimoto, Nobuo; Kimura, Toshiyoshi

    2006-12-01

    A new concept of DIAL (DIfferential Absorption Lidar) system for global CO2 monitoring using microwave modulation is introduced. This system uses quasi-CW lights which are intensity modulated in the microwave region and receives a backscattered light from the ground. In this system, ON/OFF wavelength laser lights are modulated with microwave frequencies, and the received lights of the two wavelengths can be discriminated by modulation frequencies in the electrical signal domain. Higher sensitivity optical detection can be realized compared with the conventional microwave modulation lidar by using direct down conversion of the modulation frequency. The system also has the function of ranging by using pseudo-random coding in modulation. A fiber-based optical circuit using the wavelength region of 1.6 micron is a candidate for the system configuration. After the explanation of this configuration, a feasibility study of this system on the application to global CO2 monitoring is introduced.
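
    For readers unfamiliar with DIAL, the relation that ultimately links the ON/OFF wavelength returns to a gas concentration can be sketched with the standard textbook differential-absorption expression. The function and numbers below are purely illustrative; the paper's quasi-CW, microwave-modulated scheme adds modulation and demodulation steps that are not represented here.

```python
import math

def dial_number_density(p_on, p_off, delta_sigma_cm2, path_length_cm):
    """Standard two-wavelength DIAL relation for the mean molecular number
    density over a round-trip path of length L:
        n = ln(P_off / P_on) / (2 * delta_sigma * L)
    All inputs are illustrative; in the paper's scheme P_on and P_off would be
    recovered by demodulating the two frequency-coded channels."""
    return math.log(p_off / p_on) / (2.0 * delta_sigma_cm2 * path_length_cm)

# Hypothetical returns (arbitrary units), CO2 cross-section difference and a 1 km path
print(dial_number_density(p_on=0.98, p_off=1.00,
                          delta_sigma_cm2=1.0e-23, path_length_cm=1.0e5))
```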

  20. Condition monitoring of turning process using infrared thermography technique - An experimental approach

    NASA Astrophysics Data System (ADS)

    Prasad, Balla Srinivasa; Prabha, K. Aruna; Kumar, P. V. S. Ganesh

    2017-03-01

    In metal cutting machining, major factors that affect cutting tool life are machine tool vibrations, tool tip/chip temperature and surface roughness, along with machining parameters like cutting speed, feed rate, depth of cut, tool geometry, etc., so it becomes important for the manufacturing industry to find suitable levels of process parameters for maintaining tool life. Heat generation in cutting has always been a major topic of study in machining. Recent advancements in signal processing and information technology have resulted in the use of multiple sensors for the development of effective tool condition monitoring systems with improved accuracy. From a process improvement point of view, it is definitely more advantageous to proactively monitor quality directly in the process instead of the product, so that the consequences of a defective part can be minimized or even eliminated. In the present work, a real time process monitoring method is explored using multiple sensors. It focuses on the development of a test bed for monitoring the tool condition in turning of AISI 316L steel using both coated and uncoated carbide inserts. The proposed tool condition monitoring (TCM) approach is evaluated in high speed turning using multiple sensors, namely a laser Doppler vibrometer and the infrared thermography technique. The results indicate the feasibility of using the dominant frequency of the vibration signals, together with the temperature gradient, for the monitoring of high speed turning operations. A possible correlation is identified in both regular and irregular cutting tool wear. Cutting speed and feed rate proved to be influential parameters on the measured temperatures, while depth of cut was less influential. Generally, it is observed that less heat and lower temperatures are generated when coated inserts are employed. It is found that cutting temperatures increase gradually as edge wear and deformation develop.

  1. Updating Parameters for Volcanic Hazard Assessment Using Multi-parameter Monitoring Data Streams And Bayesian Belief Networks

    NASA Astrophysics Data System (ADS)

    Odbert, Henry; Aspinall, Willy

    2014-05-01

    Evidence-based hazard assessment at volcanoes assimilates knowledge about the physical processes of hazardous phenomena and observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measureable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We discuss the uncertainty of inferences, and how our method provides a route to formal propagation of uncertainties in hazard models. Such approaches provide an attractive route to developing an interface between volcano monitoring analyses and probabilistic hazard scenario analysis. We discuss the use of BBNs in hazard analysis as a tractable and traceable tool for fast, rational assimilation of complex, multi-parameter data sets in the context of timely volcanic crisis decision support.
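
    The mechanics of a single Bayesian update, which a BBN applies node by node across its graph, can be illustrated with a minimal sketch. The prior, likelihoods and the 'magma ascent' state below are hypothetical placeholders, not values from the Soufriere Hills analysis.

```python
def bayes_update(prior, p_obs_given_state, p_obs_given_not_state):
    """Posterior probability of a state (e.g. 'magma ascent underway') after
    observing one piece of evidence (e.g. 'elevated seismicity recorded')."""
    evidence = prior * p_obs_given_state + (1.0 - prior) * p_obs_given_not_state
    return prior * p_obs_given_state / evidence

# Illustrative numbers only: start from a 10% prior and fold in two days of monitoring evidence
p = 0.10
for likelihoods in [(0.8, 0.3), (0.7, 0.25)]:
    p = bayes_update(p, *likelihoods)
print(round(p, 3))
```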

  2. Combining Volcano Monitoring Timeseries Analyses with Bayesian Belief Networks to Update Hazard Forecast Estimates

    NASA Astrophysics Data System (ADS)

    Odbert, Henry; Hincks, Thea; Aspinall, Willy

    2015-04-01

    Volcanic hazard assessments must combine information about the physical processes of hazardous phenomena with observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measureable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We show how our method provides a route to formal propagation of uncertainties in hazard models. Such approaches provide an attractive route to developing an interface between volcano monitoring analyses and probabilistic hazard scenario analysis. We discuss the use of BBNs in hazard analysis as a tractable and traceable tool for fast, rational assimilation of complex, multi-parameter data sets in the context of timely volcanic crisis decision support.

  3. Localized Overheating Phenomena and Optimization of Spark-Plasma Sintering Tooling Design

    PubMed Central

    Giuntini, Diletta; Olevsky, Eugene A.; Garcia-Cardona, Cristina; Maximenko, Andrey L.; Yurlova, Maria S.; Haines, Christopher D.; Martin, Darold G.; Kapoor, Deepak

    2013-01-01

    The present paper shows the application of a three-dimensional coupled electrical, thermal, mechanical finite element macro-scale modeling framework of Spark Plasma Sintering (SPS) to an actual problem of SPS tooling overheating, encountered during SPS experimentation. The overheating phenomenon is analyzed by varying the geometry of the tooling that exhibits the problem, namely by modeling various tooling configurations involving sequences of disk-shape spacers with step-wise increasing radii. The analysis is conducted by means of finite element simulations, intended to obtain temperature spatial distributions in the graphite press-forms, including punches, dies, and spacers; to identify the temperature peaks and their respective timing, and to propose a more suitable SPS tooling configuration with the avoidance of the overheating as a final aim. Electric currents-based Joule heating, heat transfer, mechanical conditions, and densification are imbedded in the model, utilizing the finite-element software COMSOL™, which possesses a distinguishing ability of coupling multiple physics. Thereby the implementation of a finite element method applicable to a broad range of SPS procedures is carried out, together with the more specific optimization of the SPS tooling design when dealing with excessive heating phenomena. PMID:28811398

  4. Chirped fiber Bragg grating written in highly birefringent fiber in simultaneous strain and temperature monitoring.

    PubMed

    Bieda, Marcin S; Sobotka, Piotr; Woliński, Tomasz R

    2017-02-20

    A new sensor configuration is proposed for simultaneous strain and temperature monitoring in a composite material that is based on a chirped fiber Bragg grating (CFBG) written in a highly birefringent (HB) polarization-maintaining fiber. The sensor is designed in the reflective configuration in which the CFBG acts both as a reflector and a sensing element. Since the CFBG and HB fiber induce changes in the state of polarization (SOP), interference between polarization modes in the reflected spectrum is observed and analyzed. We used a simple readout setup to enable fast, linear operation of strain sensing as well as simultaneous strain and temperature measurements in the composite.

  5. Tool Wear Monitoring Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu

    A tool wear monitoring approach considering the nonlinear behavior of cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through the cutting experiment and actual turning machining. Moreover, the variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the actually measured vibration signal and the estimated signal obtained from the time series model corresponding to dynamic model of cutting is introduced as the feature of diagnosis. Consequently, it is found that the early tool wear state (i.e. flank wear under 40µm) can be monitored, and also the optimal tool exchange time and the tool wear state for actual turning machining can be judged by this change in the residual error. Moreover, the variation of surface roughness Pz in the range of 3 to 8µm can be estimated by the monitoring of the residual error.
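
    As a rough illustration of the residual-error idea, the sketch below fits an autoregressive time-series model to a reference vibration signal and reports the RMS one-step prediction error; in the approach described above, growth of such a residual flags wear or chipping. The model order, least-squares fitting and synthetic signal are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np

def fit_ar(signal, order=8):
    """Least-squares fit of an autoregressive (AR) model to a vibration signal
    recorded with a sharp reference tool (illustrative, not the paper's exact model)."""
    X = np.column_stack([signal[order - k - 1: len(signal) - k - 1] for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def residual_rms(signal, coeffs):
    """RMS of the one-step prediction error; a growing residual is the
    wear/chipping indicator in this style of monitoring."""
    order = len(coeffs)
    X = np.column_stack([signal[order - k - 1: len(signal) - k - 1] for k in range(order)])
    return float(np.sqrt(np.mean((signal[order:] - X @ coeffs) ** 2)))

# Hypothetical usage: train on a fresh-tool recording, then track later passes
rng = np.random.default_rng(0)
reference = rng.standard_normal(2000)        # stand-in for a measured vibration signal
coeffs = fit_ar(reference)
print(residual_rms(reference, coeffs))
```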

  6. Performance Monitoring Of A Computer Numerically Controlled (CNC) Lathe Using Pattern Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Daneshmend, L. K.; Pak, H. A.

    1984-02-01

    On-line monitoring of the cutting process in CNC lathe is desirable to ensure unattended fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters which characterises the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining. However several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.

  7. Evidence-based Medicine Search: a customizable federated search engine.

    PubMed

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  8. Evidence-based Medicine Search: a customizable federated search engine

    PubMed Central

    Bracke, Paul J.; Howse, David K.; Keim, Samuel M.

    2008-01-01

    Purpose: This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. Brief Description: The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Outcomes/Conclusion: Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center. PMID:18379665

  9. Magnetotelluric Detection Thresholds as a Function of Leakage Plume Depth, TDS and Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X.; Buscheck, T. A.; Mansoor, K.

    We conducted a synthetic magnetotelluric (MT) data analysis to establish a set of specific thresholds of plume depth, TDS concentration and volume for detection of brine and CO2 leakage from legacy wells into shallow aquifers in support of Strategic Monitoring Subtask 4.1 of the US DOE National Risk Assessment Partnership (NRAP Phase II), which is to develop geophysical forward modeling tools. 900 synthetic MT data sets span 9 plume depths, 10 TDS concentrations and 10 plume volumes. The monitoring protocol consisted of 10 MT stations in a 2×5 grid laid out along the flow direction. We model the MT response in the audio frequency range of 1 Hz to 10 kHz with a 50 Ωm baseline resistivity and the maximum depth up to 2000 m. Scatter plots show the MT detection thresholds for a trio of plume depth, TDS concentration and volume. Plumes with a large volume and high TDS located at a shallow depth produce a strong MT signal. We demonstrate that the MT method with surface based sensors can detect a brine and CO2 plume so long as the plume depth, TDS concentration and volume are above the thresholds. However, it is unlikely to detect a plume at a depth larger than 1000 m with the change of TDS concentration smaller than 10%. Simulated aquifer impact data based on the Kimberlina site provides a more realistic view of the leakage plume distribution than rectangular synthetic plumes in this sensitivity study, and it will be used to estimate MT responses over simulated brine and CO2 plumes and to evaluate the leakage detectability. Integration of the simulated aquifer impact data and the MT method into the NRAP DREAM tool may provide an optimized MT survey configuration for MT data collection. This study presents a viable approach for sensitivity study of geophysical monitoring methods for leakage detection. The results come in handy for rapid assessment of leakage detectability.

  10. Damage tolerance modeling and validation of a wireless sensory composite panel for a structural health monitoring system

    NASA Astrophysics Data System (ADS)

    Talagani, Mohamad R.; Abdi, Frank; Saravanos, Dimitris; Chrysohoidis, Nikos; Nikbin, Kamran; Ragalini, Rose; Rodov, Irena

    2013-05-01

    The paper proposes the diagnostic and prognostic modeling and test validation of a Wireless Integrated Strain Monitoring and Simulation System (WISMOS). The effort verifies a hardware and web-based software tool that is able to evaluate and optimize sensorized aerospace composite structures for the purpose of Structural Health Monitoring (SHM). The tool is an extension of an existing suite of an SHM system, based on a diagnostic-prognostic system (DPS) methodology. The goal of the extended SHM-DPS is to apply multi-scale nonlinear physics-based Progressive Failure analyses to the "as-is" structural configuration to determine residual strength, remaining service life, and future inspection intervals and maintenance procedures. The DPS solution meets the JTI Green Regional Aircraft (GRA) goals towards low-weight, durable and reliable commercial aircraft. It takes advantage of the methodologies developed within the European Clean Sky JTI project WISMOS, with the capability to transmit, store and process strain data from a network of wireless sensors (e.g. strain gages, FBGA) and utilize a DPS-based methodology, based on multi-scale progressive failure analysis (MS-PFA), to determine structural health and to advise with respect to condition-based inspection and maintenance. As part of the validation of the diagnostic and prognostic system, Carbon/Epoxy ASTM coupons were fabricated and tested to extract the mechanical properties. Subsequently, two composite stiffened panels were manufactured, instrumented and tested under compressive loading: 1) an undamaged stiffened buckling panel; and 2) a damaged stiffened buckling panel including an initial diamond cut. Next, numerical finite element models of the two panels were developed and analyzed under test conditions using Multi-Scale Progressive Failure Analysis (an extension of FEM) to evaluate the damage/fracture evolution process, as well as the identification of contributing failure modes. The comparisons between predictions and test results were within 10% accuracy.

  11. Novel tool wear monitoring method in milling difficult-to-machine materials using cutting chip formation

    NASA Astrophysics Data System (ADS)

    Zhang, P. P.; Guo, Y.; Wang, B.

    2017-05-01

    The main problems in milling difficult-to-machine materials are the high cutting temperature and rapid tool wear. However, it is impossible to observe tool wear directly during machining. Tool wear and cutting chip formation are two of the most important representations of machining efficiency and quality. The purpose of this paper is to develop a model of tool wear with cutting chip formation (width of chip and radian of chip) on difficult-to-machine materials. Thereby tool wear is monitored by cutting chip formation. A milling experiment on the machining centre with three sets of cutting parameters was performed to obtain chip formation and tool wear. The experimental results show that tool wear increases gradually along with the cutting process. In contrast, the width of chip and radian of chip decrease. The model is developed by fitting the experimental data and formula transformations. Most of the monitored errors of tool wear by the chip formation are less than 10%. The smallest error is 0.2%. Overall errors by the radian of chip are less than the ones by the width of chip. This is a new way to monitor and detect tool wear by cutting chip formation in milling difficult-to-machine materials.

  12. Constructing Flexible, Configurable, ETL Pipelines for the Analysis of "Big Data" with Apache OODT

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Mattmann, C. A.; Ramirez, P.; Verma, R.; Zimdars, P. A.; Park, S.; Estrada, A.; Sumarlidason, A.; Gil, Y.; Ratnakar, V.; Krum, D.; Phan, T.; Meena, A.

    2013-12-01

    A plethora of open source technologies for manipulating, transforming, querying, and visualizing 'big data' have blossomed and matured in the last few years, driven in large part by recognition of the tremendous value that can be derived by leveraging data mining and visualization techniques on large data sets. One facet of many of these tools is that input data must often be prepared into a particular format (e.g.: JSON, CSV), or loaded into a particular storage technology (e.g.: HDFS) before analysis can take place. This process, commonly known as Extract-Transform-Load, or ETL, often involves multiple well-defined steps that must be executed in a particular order, and the approach taken for a particular data set is generally sensitive to the quantity and quality of the input data, as well as the structure and complexity of the desired output. When working with very large, heterogeneous, unstructured or semi-structured data sets, automating the ETL process and monitoring its progress becomes increasingly important. Apache Object Oriented Data Technology (OODT) provides a suite of complementary data management components called the Process Control System (PCS) that can be connected together to form flexible ETL pipelines as well as browser-based user interfaces for monitoring and control of ongoing operations. The lightweight, metadata driven middleware layer can be wrapped around custom ETL workflow steps, which themselves can be implemented in any language. Once configured, it facilitates communication between workflow steps and supports execution of ETL pipelines across a distributed cluster of compute resources. As participants in a DARPA-funded effort to develop open source tools for large-scale data analysis, we utilized Apache OODT to rapidly construct custom ETL pipelines for a variety of very large data sets to prepare them for analysis and visualization applications. We feel that OODT, which is free and open source software available through the Apache Software Foundation, is particularly well suited to developing and managing arbitrary large-scale ETL processes both for the simplicity and flexibility of its wrapper framework, as well as the detailed provenance information it exposes throughout the process. Our experience using OODT to manage processing of large-scale data sets in domains as diverse as radio astronomy, life sciences, and social network analysis demonstrates the flexibility of the framework, and the range of potential applications to a broad array of big data ETL challenges.
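
    The notion of an ETL pipeline as an ordered sequence of configurable steps operating on a metadata record can be sketched in a few lines. The sketch below is a toy stand-in and does not use Apache OODT's actual classes or APIs; the Step and run_pipeline names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]     # each step receives and returns a record/metadata dict

def run_pipeline(record: dict, steps: Iterable[Step], log=print) -> dict:
    """Execute the ETL steps in order, logging progress -- a toy stand-in for the
    kind of metadata-driven workflow that OODT's Process Control System wraps."""
    for step in steps:
        log(f"running step: {step.name}")
        record = step.run(record)
    return record

# Hypothetical extract / transform / load callables
steps = [
    Step("extract", lambda r: {**r, "raw": open(r["path"]).read()}),
    Step("transform", lambda r: {**r, "rows": [line.split(",") for line in r["raw"].splitlines()]}),
    Step("load", lambda r: {**r, "loaded": len(r["rows"])}),
]
# result = run_pipeline({"path": "input.csv"}, steps)   # needs an input.csv to run
```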

  13. Assessing the Impact of Advanced Satellite Observations in the NASA GEOS-5 Forecast System Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Gelaro, Ron; Liu, Emily; Sienkiewicz, Meta

    2011-01-01

    The adjoint of a data assimilation system provides a flexible and efficient tool for estimating observation impacts on short-range weather forecasts. The impacts of any or all observations can be estimated simultaneously based on a single execution of the adjoint system. The results can be easily aggregated according to data type, location, channel, etc., making this technique especially attractive for examining the impacts of new hyper-spectral satellite instruments and for conducting regular, even near-real time, monitoring of the entire observing system. In this talk, we present results from the adjoint-based observation impact monitoring tool in NASA's GEOS-5 global atmospheric data assimilation and forecast system. The tool has been running in various off-line configurations for some time, and is scheduled to run as a regular part of the real-time forecast suite beginning in autumn 2010. We focus on the impacts of the newest components of the satellite observing system, including AIRS, IASI and GPS. For AIRS and IASI, it is shown that the vast majority of the channels assimilated have systematic positive impacts (of varying magnitudes), although some channels degrade the forecast. Of the latter, most are moisture-sensitive or near-surface channels. The impact of GPS observations in the southern hemisphere is found to be a considerable overall benefit to the system. In addition, the spatial variability of observation impacts reveals coherent patterns of positive and negative impacts that may point to deficiencies in the use of certain observations over, for example, specific surface types. When performed in conjunction with selected observing system experiments (OSEs), the adjoint results reveal both redundancies and dependencies between observing system impacts as observations are added or removed from the assimilation system. Understanding these dependencies appears to pose a major challenge for optimizing the use of the current observational network and defining requirements for future observing systems.

  14. Ultrasonic Device for Assessing the Quality of a Wire Crimp

    NASA Technical Reports Server (NTRS)

    Yost, William T. (Inventor); Perey, Daniel F. (Inventor); Cramer, Karl E. (Inventor)

    2015-01-01

    A system for determining the quality of an electrical wire crimp between a wire and ferrule includes an ultrasonically equipped crimp tool (UECT) configured to transmit an ultrasonic acoustic wave through a wire and ferrule, and a signal processor in communication with the UECT. The signal processor includes a signal transmitting module configured to transmit the ultrasonic acoustic wave via an ultrasonic transducer, a signal receiving module configured to receive the ultrasonic acoustic wave after it passes through the wire and ferrule, and a signal analysis module configured to identify signal differences between the ultrasonic waves. The signal analysis module is then configured to compare the signal differences attributable to the wire crimp to a baseline, and to provide an output signal if the signal differences deviate from the baseline.

  15. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  16. High-Speed Monitoring of Multiple Grid-Connected Photovoltaic Array Configurations and Supplementary Weather Station.

    PubMed

    Boyd, Matthew T

    2017-06-01

    Three grid-connected monocrystalline silicon photovoltaic arrays have been instrumented with research-grade sensors on the Gaithersburg, MD campus of the National Institute of Standards and Technology (NIST). These arrays range from 73 kW to 271 kW and have different tilts, orientations, and configurations. Irradiance, temperature, wind, and electrical measurements at the arrays are recorded, and images are taken of the arrays to monitor shading and capture any anomalies. A weather station has also been constructed that includes research-grade instrumentation to measure all standard meteorological quantities plus additional solar irradiance spectral bands, full spectrum curves, and directional components using multiple irradiance sensor technologies. Reference photovoltaic (PV) modules are also monitored to provide comprehensive baseline measurements for the PV arrays. Images of the whole sky are captured, along with images of the instrumentation and reference modules to document any obstructions or anomalies. Nearly all measurements at the arrays and weather station are sampled and saved every 1 s, with monitoring having started on Aug. 1, 2014. This report describes the instrumentation approach used to monitor the performance of these photovoltaic systems, measure the meteorological quantities, and acquire the images for use in PV performance and weather monitoring and computer model validation.

  17. High-Speed Monitoring of Multiple Grid-Connected Photovoltaic Array Configurations and Supplementary Weather Station

    PubMed Central

    Boyd, Matthew T.

    2017-01-01

    Three grid-connected monocrystalline silicon photovoltaic arrays have been instrumented with research-grade sensors on the Gaithersburg, MD campus of the National Institute of Standards and Technology (NIST). These arrays range from 73 kW to 271 kW and have different tilts, orientations, and configurations. Irradiance, temperature, wind, and electrical measurements at the arrays are recorded, and images are taken of the arrays to monitor shading and capture any anomalies. A weather station has also been constructed that includes research-grade instrumentation to measure all standard meteorological quantities plus additional solar irradiance spectral bands, full spectrum curves, and directional components using multiple irradiance sensor technologies. Reference photovoltaic (PV) modules are also monitored to provide comprehensive baseline measurements for the PV arrays. Images of the whole sky are captured, along with images of the instrumentation and reference modules to document any obstructions or anomalies. Nearly all measurements at the arrays and weather station are sampled and saved every 1 s, with monitoring having started on Aug. 1, 2014. This report describes the instrumentation approach used to monitor the performance of these photovoltaic systems, measure the meteorological quantities, and acquire the images for use in PV performance and weather monitoring and computer model validation. PMID:28670044

  18. Central-Monitor Software Module

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron; Foster, Conrad

    2005-01-01

    One of the software modules of the emergency-vehicle traffic-light-preemption system of the two preceding articles performs numerous functions for the central monitoring subsystem. This module monitors the states of all units (vehicle transponders and intersection controllers): It provides real-time access to the phases of traffic and pedestrian lights, and maps the positions and states of all emergency vehicles. Most of this module is used for installation and configuration of units as they are added to the system. The module logs all activity in the system, thereby providing information that can be analyzed to minimize response times and optimize response strategies. The module can be used from any location within communication range of the system; with proper configuration, it can also be used via the Internet. It can be integrated into call-response centers, where it can be used for alerting emergency vehicles and managing their responses to specific incidents. A variety of utility subprograms provide access to any or all units for purposes of monitoring, testing, and modification. Included are "sniffer" utility subprograms that monitor incoming and outgoing data for accuracy and timeliness, and that quickly and autonomously shut off malfunctioning vehicle or intersection units.

  19. Collective Flows of 16O+16O Collisions with α-Clustering Configurations

    NASA Astrophysics Data System (ADS)

    Guo, Chen-Chen; He, Wan-Bing; Ma, Yu-Gang

    2017-08-01

    The main purpose of the present paper is to discuss whether or not the collective flows in heavy-ion collisions at Fermi energy can be taken as a tool to investigate the cluster configuration in light nuclei. In practice, within an Extended Quantum Molecular Dynamics model, four α-clustering (linear chain, kite, square, and tetrahedron) configurations of 16O are employed in the initialization, 16O+16O collisions around Fermi energy (40-60 MeV/nucleon) with impact parameter 1-3 fm are simulated, and the directed and elliptic flows are analyzed. It is found that collective flows are influenced by the different α-clustering configurations, and the directed flow of free protons is more sensitive to the initial cluster configuration than the elliptic flow. Nuclear reactions at Fermi energy can be taken as a useful way to study cluster configuration in light nuclei.

  20. Temporal variations in the potential hydrological performance of extensive green roof systems

    NASA Astrophysics Data System (ADS)

    De-Ville, Simon; Menon, Manoj; Stovin, Virginia

    2018-03-01

    Existing literature provides contradictory information about variation in potential green roof hydrological performance over time. This study has evaluated a long-term hydrological monitoring record from a series of extensive green roof test beds to identify long-term evolutions and sub-annual (seasonal) variations in potential hydrological performance. Monitoring of nine differently-configured extensive green roof test beds took place over a period of 6 years in Sheffield, UK. Long-term evolutions and sub-annual trends in maximum potential retention performance were identified through physical monitoring of substrate field capacity over time. An independent evaluation of temporal variations in detention performance was undertaken through the fitting of reservoir-routing model parameters. Aggregation of the resulting retention and detention variations permitted the prediction of extensive green roof hydrological performance in response to a 1-in-30-year 1-h summer design storm for Sheffield, UK, which facilitated the comparison of multi and sub-annual hydrological performance variations. Sub-annual (seasonal) variation was found to be significantly greater than long-term evolution. Potential retention performance increased by up to 12% after 5-years, whilst the maximum sub-annual variation in potential retention was 27%. For vegetated roof configurations, a 4% long-term improvement was observed for detention performance, compared to a maximum 63% sub-annual variation. Consistent long-term reductions in detention performance were observed in unvegetated roof configurations, with a non-standard expanded-clay substrate experiencing a 45% reduction in peak attenuation over 5-years. Conventional roof configurations exhibit stable long-term hydrological performance, but are nonetheless subject to sub-annual variation.
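
    The detention analysis relies on fitting reservoir-routing model parameters; a minimal single linear reservoir, where outflow is proportional to storage, conveys the idea. The routing constant, time step and inflow series below are illustrative assumptions, not the study's calibrated values or its exact model structure.

```python
def route_linear_reservoir(inflow_mm, k=0.3, dt=1.0, s0=0.0):
    """Single linear reservoir: storage fills with inflow and drains at a rate
    proportional to storage (q_out = k * S). Returns the outflow series."""
    storage, outflow = s0, []
    for q_in in inflow_mm:
        storage += q_in * dt
        q_out = k * storage
        storage -= q_out * dt
        outflow.append(q_out)
    return outflow

# A short synthetic runoff pulse (mm per time step)
print(route_linear_reservoir([0, 2, 5, 3, 1, 0, 0, 0]))
```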

  1. Midlevel Maternity Providers' Preferences of a Childbirth Monitoring Tool in Low-Income Health Units in Uganda.

    PubMed

    Balikuddembe, Michael S; Wakholi, Peter K; Tumwesigye, Nazarius M; Tylleskär, Thorkild

    2018-01-01

    A third of women in childbirth are inadequately monitored, partly due to the tools used. Some stakeholders assert that the current labour monitoring tools are not efficient and need improvement to become more relevant to childbirth attendants. The study objective was to explore the expectations of maternity service providers for a mobile childbirth monitoring tool in maternity facilities in a low-income country like Uganda. Semi-structured interviews of purposively selected midwives and doctors in rural-urban childbirth facilities in Uganda were conducted before thematic data analysis. The childbirth providers expected a tool that enabled fast and secure childbirth record storage and sharing. They desired a tool that would automatically and conveniently register patient clinical findings, and actively provide interactive clinical decision support on a busy ward. The tool ought to support agreed upon standards for good pregnancy outcomes but also adaptable to the patient and their difficult working conditions. The tool functionality should include clinical data management and real-time decision support to the midwives, while the non-functional attributes include versatility and security.

  2. Link monitor and control operator assistant: A prototype demonstrating semiautomated monitor and control

    NASA Technical Reports Server (NTRS)

    Lee, L. F.; Cooper, L. P.

    1993-01-01

    This article describes the approach, results, and lessons learned from an applied research project demonstrating how artificial intelligence (AI) technology can be used to improve Deep Space Network operations. Configuring antenna and associated equipment necessary to support a communications link is a time-consuming process. The time spent configuring the equipment is essentially overhead and results in reduced time for actual mission support operations. The NASA Office of Space Communications (Code O) and the NASA Office of Advanced Concepts and Technology (Code C) jointly funded an applied research project to investigate technologies which can be used to reduce configuration time. This resulted in the development and application of AI-based automated operations technology in a prototype system, the Link Monitor and Control Operator Assistant (LMC OA). The LMC OA was tested over the course of three months in a parallel experimental mode on very long baseline interferometry (VLBI) operations at the Goldstone Deep Space Communications Center. The tests demonstrated a 44 percent reduction in pre-calibration time for a VLBI pass on the 70-m antenna. Currently, this technology is being developed further under Research and Technology Operating Plan (RTOP)-72 to demonstrate the applicability of the technology to operations in the entire Deep Space Network.

  3. A configurable electronics system for the ESS-Bilbao beam position monitors

    NASA Astrophysics Data System (ADS)

    Muguira, L.; Belver, D.; Etxebarria, V.; Varnasseri, S.; Arredondo, I.; del Campo, M.; Echevarria, P.; Garmendia, N.; Feuchtwanger, J.; Jugo, J.; Portilla, J.

    2013-09-01

    A versatile and configurable system has been developed in order to monitor the beam position and to meet all the requirements of the future ESS-Bilbao Linac. At the same time the design has been conceived to be open and configurable so that it could eventually be used in different kinds of accelerators, independent of the charged particle, with minimal change. The design of the Beam Position Monitors (BPMs) system includes a test bench both for button-type pick-ups (PU) and striplines (SL), the electronic units and the control system. The electronic units consist of two main parts. The first part is an Analog Front-End (AFE) unit where the RF signals are filtered, conditioned and converted to base-band. The second part is a Digital Front-End (DFE) unit which is based on an FPGA board where the base-band signals are sampled in order to calculate the beam position, the amplitude and the phase. To manage the system a Multipurpose Controller (MC) developed at ESSB has been used. It includes the FPGA management, the EPICS integration and Archiver Instances. The system is described in detail, and the performance of the PU and SL BPM designs measured with this electronics system is compared and discussed.

  4. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac.

    PubMed

    Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen

    2016-04-01

    To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used in the authors' clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.
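
    As an illustration of the kind of cross-check described above (not the ArtQA code itself), the sketch below compares a few per-beam parameters taken from a TPS plan export against the values stored in an R&V database; the field names and tolerances are assumptions made purely for the example:

```python
# Minimal sketch (not the ArtQA implementation): cross-check a handful of
# per-beam parameters exported from a TPS plan file against the values stored
# in an R&V database. Field names and tolerances are illustrative assumptions.
from typing import Dict

TOLERANCES = {"mu": 0.5, "gantry_deg": 0.1, "collimator_deg": 0.1}  # assumed values

def compare_beam(tps: Dict[str, float], rnv: Dict[str, float]) -> Dict[str, float]:
    """Return the parameters whose TPS/R&V difference exceeds its tolerance."""
    deviations = {}
    for key, tol in TOLERANCES.items():
        diff = abs(tps[key] - rnv[key])
        if diff > tol:
            deviations[key] = diff
    return deviations

# Example: beams are matched beforehand on geometry (gantry/collimator angles)
tps_beam = {"mu": 124.3, "gantry_deg": 180.0, "collimator_deg": 10.0}
rnv_beam = {"mu": 125.1, "gantry_deg": 180.0, "collimator_deg": 10.0}
print(compare_beam(tps_beam, rnv_beam))  # reports the out-of-tolerance MU difference
```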

  5. Design of automation tools for management of descent traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Nedell, William

    1988-01-01

    The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display consisting of a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system, consisting of the descent advisor algorithm, a library of aircraft performance models, national airspace system databases, and interactive display software, has been implemented on a workstation made by Sun Microsystems, Inc. It is planned to use this configuration in operational evaluations at an en route center.

  6. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Pei, E-mail: gpchen@mcw.edu; Ahunbay, Ergun; Li, X. Allen

    Purpose: To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. Methods: The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose–volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA has been used in the authors' clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. Conclusions: The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  7. NCCDS configuration management process improvement

    NASA Technical Reports Server (NTRS)

    Shay, Kathy

    1993-01-01

    By concentrating on defining and improving specific Configuration Management (CM) functions, processes, procedures, personnel selection/development, and tools, internal and external customers received improved CM services. Job performance within the section increased in both satisfaction and output. Participation in achieving major improvements has led to the delivery of consistent quality CM products as well as significant decreases in every measured CM metric category.

  8. Development and Testing of a High School Business Game. Final Report.

    ERIC Educational Resources Information Center

    McNair, Douglas D.; West, Alfred P., Jr.

    A computer based business game to be used as a teaching tool in high school business-related courses was designed, developed, and tested. The game is constructed in modules that can be linked together in a variety of ways to achieve a different decision configuration for different class needs and a changing configuration over time to parallel the…

  9. Hydrogen Chemical Configuration and Thermal Stability in Tungsten Disulfide Nanoparticles Exposed to Hydrogen Plasma

    PubMed Central

    Laikhtman, Alex; Makrinich, Gennady; Sezen, Meltem; Yildizhan, Melike Mercan; Martinez, Jose I.; Dinescu, Doru; Prodana, Mariana; Enachescu, Marius; Alonso, Julio A.; Zak, Alla

    2017-01-01

    The chemical configuration and interaction mechanism of hydrogen adsorbed in inorganic nanoparticles of WS2 are investigated. Our recent approaches of using hydrogen activated by either microwave or radiofrequency plasma dramatically increased the efficiency of its adsorption on the nanoparticle surface. In the current work we place emphasis on elucidating the chemical configuration of the adsorbed hydrogen. This configuration is of primary importance as it affects the adsorption stability and the possibility of release. To gain insight into the chemical configuration, we combined the experimental analysis methods with theoretical modeling based on the density functional theory (DFT). Micro-Raman spectroscopy was used as a primary tool to elucidate chemical bonding of hydrogen and to distinguish between chemi- and physisorption. Hydrogen adsorbed in molecular form (H2) was clearly identified in all the plasma-hydrogenated WS2 nanoparticle samples. It was shown that the adsorbed hydrogen is generally stable under high vacuum conditions at room temperature, which implies its stability at the ambient atmosphere. A DFT model was developed to simulate the adsorption of hydrogen in the WS2 nanoparticles. This model considers various adsorption sites and identifies the preferential locations of the adsorbed hydrogen in several WS2 structures, demonstrating good concordance between theory and experiment and providing tools for optimizing the hydrogen exposure conditions and the type of substrate materials. PMID:28596812

  10. Performance Evaluation of a Low-Cost, Real-Time Community Air Monitoring Station

    EPA Science Inventory

    The US EPA’s Village Green Project (VGP) is an example of using innovative technology to enable community-level low-cost real-time air pollution measurements. The VGP is an air monitoring system configured as a park bench located outside of a public library in Durham, NC. It co...

  11. New narrow-beam neutron spectrometer in complex monitoring system

    NASA Astrophysics Data System (ADS)

    Mikhalko, Evgeniya; Balabin, Yuriy; Maurchev, Evgeniy; Germanenko, Aleksey

    2018-03-01

    In the interaction of cosmic rays (CRs) with Earth's atmosphere, neutrons are formed in a wide range of energies: from thermal (E ≈ 0.025 eV) to ultrarelativistic (E > 1 GeV). To detect and study CRs, the Polar Geophysical Institute (PGI) uses a complex monitoring system containing detectors of various configurations. The standard neutron monitor (NM) 18-NM-64 is sensitive to neutrons with energies E > 50 MeV. The lead-free section of the neutron monitor (BSRM) detects neutrons with energies E ≈ 0.1-1 MeV. Also, for sharing with standard detectors, the Apatity NM station has developed and installed a neutron spectrometer with three energy channels and a particle reception angle of 15 degrees. The configuration of the device makes it possible to study the degree of anisotropy of the particle flux from different directions. We have obtained characteristics of the detector (response function and particle reception angle), as well as its geometric dimensions, through numerical simulation using the GEANT4 toolkit [Agostinelli et al., 2003]. During operation of the device, we have collected a database of observations and obtained preliminary results.

  12. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.

    2014-12-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), as part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This implemented metadata system with its advanced but easy to handle search tool supports users, developers and their tools to retrieve the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and helps identify discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via shell or web-system. Therefore, plugged-in tools gain automatically from transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: guest password: miklip
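
    A minimal sketch of the reuse idea described above, assuming a configuration can be hashed canonically and looked up before a tool is launched; the real system stores its history in MySQL, whereas the stand-in below uses an in-memory sqlite3 table with an invented schema:

```python
# Illustrative sketch only: reuse-lookup for analysis configurations, in the
# spirit of the history sub-system described above. The real system uses
# MySQL; sqlite3 is used here purely as a self-contained stand-in, and the
# table layout is an assumption.
import hashlib, json, sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE history (cfg_hash TEXT, tool TEXT, result_path TEXT)")

def cfg_hash(tool: str, cfg: dict) -> str:
    """Canonical hash of a tool name plus its configuration dictionary."""
    payload = json.dumps({"tool": tool, "cfg": cfg}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_or_reuse(tool: str, cfg: dict) -> str:
    h = cfg_hash(tool, cfg)
    row = db.execute("SELECT result_path FROM history WHERE cfg_hash=?", (h,)).fetchone()
    if row:                                  # identical configuration already evaluated
        return row[0]                        # suggest the existing result instead of rerunning
    result_path = f"/results/{h[:12]}"       # placeholder for the actual tool run
    db.execute("INSERT INTO history VALUES (?, ?, ?)", (h, tool, result_path))
    return result_path

first = run_or_reuse("anomaly-correlation", {"model": "MPI-ESM", "lead": 5})
again = run_or_reuse("anomaly-correlation", {"model": "MPI-ESM", "lead": 5})
assert first == again   # the second call is served from the history table
```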

  13. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), as part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This implemented metadata system with its advanced but easy to handle search tool supports users, developers and their tools to retrieve the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and helps identify discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via shell or web-system. Therefore, plugged-in tools gain automatically from transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: click on "Guest"

  14. Evaluation of helicopter noise due to blade-vortex interaction for five tip configurations. [conducted in the Langley V/STOL tunnel

    NASA Technical Reports Server (NTRS)

    Hoad, D. R.

    1979-01-01

    The effect of tip shape modification on blade vortex interaction induced helicopter blade slap noise was investigated. Simulated flight and descent velocities which have been shown to produce blade slap were tested. Aerodynamic performance parameters of the rotor system were monitored to ensure properly matched flight conditions among the tip shapes. The tunnel was operated in the open throat configuration with treatment to improve the acoustic characteristics of the test chamber. Four promising tips were used along with a standard square tip as a baseline configuration. A detailed acoustic evaluation on the same rotor system of the relative applicability of the various tip configurations for blade slap noise reduction is provided.

  15. PIMMS tools for capturing metadata about simulations

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

    PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK Universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs), where a standard set of questions is asked of all models which perform standard sets of experiments; disciplinary-level metadata collection, where a standard set of questions is asked of all models but experiments are specified by users; and bespoke metadata creation, where the users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex with many variables that can be modified. Therefore it has become common practice to begin a series of experiments by using another climate model configuration as a starting point. Usually this other configuration is provided by a researcher in the same research group or by a previous collaborator with whom there is an existing scientific relationship. Some efforts have been made at the university department level to create documentation but there is a wide diversity in the scope and purpose of this information. The consistent and comprehensive documentation enabled by PIMMS will enable the wider sharing of climate model data and configuration information. The PIMMS methodology assumes an initial effort to document standard model configurations. Once these descriptions have been created users need only describe the specific way in which their model configuration is different from the standard. Thus the documentation burden on the user is specific to the experiment they are performing and fits easily into the workflow of doing their science. PIMMS metadata is independent of data and as such is ideally suited for documenting model development. PIMMS provides a framework for sharing information about failed model configurations for which data are not kept, the negative results that don't appear in scientific literature. PIMMS is a UK project funded by JISC, The University of Reading, The University of Bristol and STFC.

  16. Tool simplifies machining of pipe ends for precision welding

    NASA Technical Reports Server (NTRS)

    Matus, S. T.

    1969-01-01

    Single tool prepares a pipe end for precision welding by simultaneously performing internal machining, end facing, and bevel cutting to specification standards. The machining operation requires only one milling adjustment, can be performed quickly, and produces the high quality pipe-end configurations required to ensure precision-welded joints.

  17. Unix Security Cookbook

    NASA Astrophysics Data System (ADS)

    Rehan, S. C.

    This document has been written to help Site Managers secure their Unix hosts from being compromised by hackers. I have given brief introductions to the security tools along with downloading, configuring and running information. I have also included a section on my recommendations for installing these security tools starting from an absolute minimum security requirement.

  18. Ab initio characterization of electron transfer coupling in photoinduced systems: generalized Mulliken-Hush with configuration-interaction singles.

    PubMed

    Chen, Hung-Cheng; Hsu, Chao-Ping

    2005-12-29

    To calculate electronic couplings for photoinduced electron transfer (ET) reactions, we propose and test the use of ab initio quantum chemistry calculation for excited states with the generalized Mulliken-Hush (GMH) method. Configuration-interaction singles (CIS) is proposed to model the locally excited (LE) and charge-transfer (CT) states. When the CT state couples with other high lying LE states, affecting coupling values, the image charge approximation (ICA), as a simple solvent model, can lower the energy of the CT state and decouple the undesired high-lying local excitations. We found that coupling strength is weakly dependent on many details of the solvent model, indicating the validity of the Condon approximation. Therefore, a trustworthy value can be obtained via this CIS-GMH scheme, with ICA used as a tool to improve and monitor the quality of the results. Systems we tested included a series of rigid, sigma-linked donor-bridge-acceptor compounds where "through-bond" coupling has been previously investigated, and a pair of molecules where "through-space" coupling was experimentally demonstrated. The calculated results agree well with experimentally inferred values in the coupling magnitudes (for both systems studied) and in the exponential distance dependence (for the through-bond series). Our results indicate that this new scheme can properly account for ET coupling arising from both through-bond and through-space mechanisms.
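
    For readers unfamiliar with the method, the standard two-state GMH expression for the donor-acceptor coupling (quoted here from the general literature, not from this paper) is

    $$H_{DA} \;=\; \frac{\mu_{12}\,\Delta E_{12}}{\sqrt{(\Delta\mu_{12})^{2} + 4\,\mu_{12}^{2}}},$$

    where $\Delta E_{12}$ is the vertical energy gap between the two adiabatic states (e.g. the CIS LE and CT states), $\mu_{12}$ is the transition dipole moment along the charge-transfer direction, and $\Delta\mu_{12}$ is the difference of the adiabatic state dipole moments.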

  19. Spacelab data processing facility (SLDPF) quality assurance (QA)/data accounting (DA) expert systems - Transition from prototypes to operational systems

    NASA Technical Reports Server (NTRS)

    Basile, Lisa

    1988-01-01

    The SLDPF is responsible for the capture, quality monitoring, processing, accounting, and shipment of Spacelab and/or Attached Shuttle Payloads (ASP) telemetry data to various user facilities. Expert systems will aid in the performance of the quality assurance and data accounting functions of the two SLDPF functional elements: the Spacelab Input Processing System (SIPS) and the Spacelab Output Processing System (SOPS). Prototypes were developed for each as independent efforts. The SIPS Knowledge System Prototype (KSP) used the commercial shell OPS5+ on an IBM PC/AT; the SOPS Expert System Prototype used the expert system shell CLIPS implemented on a Macintosh personal computer. Both prototypes emulate the duties of the respective QA/DA analysts based upon analyst input and predetermined mission criteria parameters, and recommended instructions and decisions governing the reprocessing, release, or holding for further analysis of data. These prototypes demonstrated feasibility and high potential for operational systems. Increase in productivity, decrease of tedium, consistency, concise historical records, and a training tool for new analysts were the principal advantages. An operational configuration, taking advantage of the SLDPF network capabilities, is under development with the expert systems being installed on SUN workstations. This new configuration in conjunction with the potential of the expert systems will enhance the efficiency, in both time and quality, of the SLDPF's release of Spacelab/ASP data products.

  20. Spacelab data processing facility (SLDPF) Quality Assurance (QA)/Data Accounting (DA) expert systems: Transition from prototypes to operational systems

    NASA Technical Reports Server (NTRS)

    Basile, Lisa

    1988-01-01

    The SLDPF is responsible for the capture, quality monitoring, processing, accounting, and shipment of Spacelab and/or Attached Shuttle Payloads (ASP) telemetry data to various user facilities. Expert systems will aid in the performance of the quality assurance and data accounting functions of the two SLDPF functional elements: the Spacelab Input Processing System (SIPS) and the Spacelab Output Processing System (SOPS). Prototypes were developed for each as independent efforts. The SIPS Knowledge System Prototype (KSP) used the commercial shell OPS5+ on an IBM PC/AT; the SOPS Expert System Prototype used the expert system shell CLIPS implemented on a Macintosh personal computer. Both prototypes emulate the duties of the respective QA/DA analysts based upon analyst input and predetermined mission criteria parameters, and recommended instructions and decisions governing the reprocessing, release, or holding for further analysis of data. These prototypes demonstrated feasibility and high potential for operational systems. Increase in productivity, decrease of tedium, consistency, concise historical records, and a training tool for new analysts were the principal advantages. An operational configuration, taking advantage of the SLDPF network capabilities, is under development with the expert systems being installed on SUN workstations. This new configuration in conjunction with the potential of the expert systems will enhance the efficiency, in both time and quality, of the SLDPF's release of Spacelab/ASP data products.

  1. FE-XIII Infrared / FE-XIV Green Line Ratio Diagnostics (P55)

    NASA Astrophysics Data System (ADS)

    Srivastava, A. K.; et al.

    2006-11-01

    We consider the first 27-level atomic model of Fe XIII (5.9 < log Te < 6.4) to estimate its ground level populations, taking account of electron as well as proton collisional excitations and de-excitations, radiative cascades, radiative excitations and de-excitations. Radiative cascade is important, but the effect of the dilution factor is negligible at higher electron densities. The $^3P_1$-$^3P_0$ and $^3P_2$-$^3P_1$ transitions in the ground configuration 3s$^2$3p$^2$ of Fe XIII result in two forbidden coronal emission lines in the infrared region, namely 10747 Å and 10798 Å, while the 5303 Å green line is formed in the 3s$^2$3p ground configuration of Fe XIV as a result of the $^2P_{3/2}$-$^2P_{1/2}$ magnetic dipole transition. The line-widths of an appropriate pair of forbidden coronal emission lines observed simultaneously can be a useful diagnostic tool to deduce temperature and non-thermal velocity in large-scale coronal structures, using the intensity ratios of the lines as the temperature signature instead of assuming the ion temperature to be equal to the electron temperature. Since the line intensity ratios $I_{G5303}/I_{IR10747}$ and $I_{G5303}/I_{IR10798}$ have very weak density dependence, they are ideal monitors for temperature mapping in the solar corona.
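
    The diagnostic rests on the usual Doppler line-width relation; the expression below is the standard one and is given here only to make the idea explicit, as the paper itself may use a slightly different convention:

    $$\Delta\lambda_{1/e} \;=\; \frac{\lambda_0}{c}\sqrt{\frac{2k_B T_i}{M} + \xi^{2}},$$

    where $T_i$ is the ion temperature, $M$ is the ion mass (the same for Fe XIII and Fe XIV), and $\xi$ is the non-thermal velocity; once the temperature is fixed by the intensity ratios, the measured widths of the line pair yield $\xi$.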

  2. The Design of Modular Web-Based Collaboration

    NASA Astrophysics Data System (ADS)

    Intapong, Ploypailin; Settapat, Sittapong; Kaewkamnerdpong, Boonserm; Achalakul, Tiranee

    Online collaborative systems are popular communication channels as the systems allow people from various disciplines to interact and collaborate with ease. The systems provide communication tools and services that can be integrated on the web; consequently, the systems are more convenient to use and easier to install. Nevertheless, most of the currently available systems are designed according to some specific requirements and cannot be straightforwardly integrated into various applications. This paper provides the design of a new collaborative platform, which is component-based and re-configurable. The platform is called the Modular Web-based Collaboration (MWC). MWC shares the same concept as computer supported collaborative work (CSCW) and computer-supported collaborative learning (CSCL), but it provides configurable tools for online collaboration. Each tool module can be integrated into users' web applications freely and easily. This makes collaborative system flexible, adaptable and suitable for online collaboration.

  3. Scalable Node Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.

    2012-07-30

    The project description is: (1) build a high performance computer; and (2) create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be able to be easily added to the tool. CBTF stands for Component Based Tool Framework. It is scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to do a task on many nodes. Its components are reusable and easily added to a new tool. There are three levels of CBTF: (1) the frontend node, which interacts with users; (2) filter nodes, which filter or concatenate information from the backend nodes; and (3) backend nodes, where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Services. It is a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It is dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts and more. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; the backend nodes send information to the filter nodes; and the filter nodes concatenate the information and send it to a database on the frontend node. Ltool is a useful tool when it comes to monitoring nodes on a cluster because the overhead involved with running the tool is not particularly high and it will automatically scale to any size cluster.
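
    As a toy illustration of what a node-level sampler such as the Meminfo component gathers (this is not LDMS or CBTF code), the following sketch reads /proc/meminfo on a Linux backend node and returns a metric dictionary that a frontend could aggregate:

```python
# Toy sampler in the spirit of the Meminfo component mentioned above: read
# /proc/meminfo on a (Linux) backend node and return a metric dictionary.
# Illustration only, not LDMS/CBTF code.
def sample_meminfo(path: str = "/proc/meminfo") -> dict:
    metrics = {}
    with open(path) as fh:
        for line in fh:
            key, value = line.split(":", 1)
            metrics[key.strip()] = int(value.split()[0])   # values reported in kB
    return metrics

if __name__ == "__main__":
    snapshot = sample_meminfo()
    print({k: snapshot[k] for k in ("MemTotal", "MemFree") if k in snapshot})
```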

  4. Examining the Suitability of a Sparse In Situ Soil Moisture Monitoring Network for Assimilation into a Spatially Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, N.; Verhoest, N.; Pauwels, V. R. N.

    2015-12-01

    The continuous monitoring of soil moisture in a permanent network can yield an interesting data product for use in hydrological data assimilation. Major advantages of in situ observations compared to remote sensing products are the potential vertical extent of the measurements, the finer temporal resolution of the observation time series, the smaller impact of land cover variability on the observation bias, etc. However, two major disadvantages are the typical small integration volume of in situ measurements and the often large spacing between monitoring locations. This causes only a small part of the modelling domain to be directly observed. Furthermore, the spatial configuration of the monitoring network is typically temporally non-dynamic. Therefore two questions can be raised. Do spatially sparse in situ soil moisture observations contain a sufficient data representativeness to successfully assimilate them into the largely unobserved spatial extent of a distributed hydrological model? And if so, how is this assimilation best performed? Consequently two important factors that can influence the success of assimilating in situ monitored soil moisture are the spatial configuration of the monitoring network and the applied assimilation algorithm. In this research the influence of those factors is examined by means of synthetic data-assimilation experiments. The study area is the ± 100 km² catchment of the Bellebeek in Flanders, Belgium. The influence of the spatial configuration is examined by varying the amount of locations and their position in the landscape. The latter is performed using several techniques including temporal stability analysis and clustering. Furthermore the observation depth is considered by comparing assimilation of surface layer (5 cm) and deeper layer (50 cm) observations. The impact of the assimilation algorithm is assessed by comparing the performance obtained with two well-known algorithms: Newtonian nudging and the Ensemble Kalman Filter.
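
    For context, the Ensemble Kalman Filter mentioned above updates each ensemble member with the standard equations (quoted from the general data-assimilation literature, not from this abstract):

    $$\mathbf{x}_i^{a} = \mathbf{x}_i^{f} + \mathbf{K}\left(\mathbf{y}_i - \mathbf{H}\mathbf{x}_i^{f}\right), \qquad \mathbf{K} = \mathbf{P}^{f}\mathbf{H}^{T}\left(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{T} + \mathbf{R}\right)^{-1},$$

    where $\mathbf{P}^f$ is the forecast error covariance estimated from the ensemble, $\mathbf{H}$ maps the model soil moisture states to the observed locations and depths, $\mathbf{R}$ is the observation error covariance, and $\mathbf{y}_i$ are (perturbed) observations.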

  5. Evaluating the Effect of Integrated System Health Management on Mission Effectiveness

    DTIC Science & Technology

    2013-03-01

    [OV-5a activity diagram labels: Health Status, Fault Detection, IMS Commands (Needline)] ... UAS to self-detect, isolate, and diagnose system health problems. Current flight avionics architectures may include lower-level sub-system health monitoring or may isolate health monitoring functions to a black box configuration, but a vehicle-wide health monitoring information system has...

  6. Mechanical System Analysis/Design Tool (MSAT) Quick Guide

    NASA Technical Reports Server (NTRS)

    Lee, HauHua; Kolb, Mark; Madelone, Jack

    1998-01-01

    MSAT is a unique multi-component multi-disciplinary tool that organizes design analysis tasks around object-oriented representations of configuration components, analysis programs and modules, and data transfer links between them. This creative modular architecture enables rapid generation of input stream for trade-off studies of various engine configurations. The data transfer links automatically transport output from one application as relevant input to the next application once the sequence is set up by the user. The computations are managed via constraint propagation - the constraints supplied by the user as part of any optimization module. The software can be used in the preliminary design stage as well as during the detail design of product development process.

  7. Aeroelastic Optimization of Generalized Tube and Wing Aircraft Concepts Using HCDstruct Version 2.0

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Gern, Frank H.

    2017-01-01

    Major enhancements were made to the Higher-fidelity Conceptual Design and structural optimization (HCDstruct) tool developed at NASA Langley Research Center (LaRC). Whereas previous versions were limited to hybrid wing body (HWB) configurations, the current version of HCDstruct now supports the analysis of generalized tube and wing (TW) aircraft concepts. Along with significantly enhanced user input options for all aircraft configurations, these enhancements represent HCDstruct version 2.0. Validation was performed using a Boeing 737-200 aircraft model, for which primary structure weight estimates agreed well with available data. Additionally, preliminary analysis of the NASA D8 (ND8) aircraft concept was performed, highlighting several new features of the tool.

  8. Application research of Ganglia in Hadoop monitoring and management

    NASA Astrophysics Data System (ADS)

    Li, Gang; Ding, Jing; Zhou, Lixia; Yang, Yi; Liu, Lei; Wang, Xiaolei

    2017-03-01

    There are many applications of Hadoop systems in the fields of big data and cloud computing. The storage and application test bench of the seismic network at the Earthquake Administration of Tianjin is built on a Hadoop system, which is operated and monitored with the open-source software Ganglia. This paper reviews the functionality of Ganglia, its installation and configuration process, and its effectiveness for operating and monitoring the Hadoop system. It also briefly introduces the idea and effect of monitoring the Hadoop system with the Nagios software. This experience is valuable for monitoring systems of cloud computing platforms in the industry.
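
    As a rough illustration of how such monitoring data can be consumed programmatically (not taken from the paper), the sketch below polls a gmond daemon for its XML snapshot and filters Hadoop-related metrics; the host, the default TCP port 8649, and the metric-name filter are assumptions about a typical Ganglia deployment:

```python
# Illustrative sketch: poll a Ganglia gmond daemon for its XML cluster
# snapshot and list Hadoop-related metrics. Host, port and name filter are
# assumptions about a typical deployment, not values from the paper.
import socket
import xml.etree.ElementTree as ET

def fetch_gmond_xml(host: str = "localhost", port: int = 8649) -> str:
    """gmond dumps its current metric tree as XML to any connecting TCP client."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

def hadoop_metrics(xml_text: str):
    """Yield (name, value) pairs whose metric names look Hadoop-related."""
    root = ET.fromstring(xml_text)
    for metric in root.iter("METRIC"):
        name = metric.get("NAME", "")
        if "dfs" in name or "mapred" in name or "yarn" in name:
            yield name, metric.get("VAL")

# Usage (requires a reachable gmond daemon):
# for name, val in hadoop_metrics(fetch_gmond_xml()):
#     print(name, val)
```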

  9. Applicability of the Design Tool for Inventory and Monitoring (DTIM) and the Explore Sample Data Tool for the Assessment of Caribbean Forest Dynamics

    Treesearch

    Humfredo Marcano-Vega; Andrew Lister; Kevin Megown; Charles Scott

    2016-01-01

    There is a growing need within the insular Caribbean for technical assistance in planning forest-monitoring projects and data analysis. This paper gives an overview of software tools developed by the USDA Forest Service’s National Inventory and Monitoring Applications Center and the Remote Sensing Applications Center. We discuss their applicability in the efficient...

  10. Biomechanical and performance implications of weapon design: comparison of bullpup and conventional configurations.

    PubMed

    Stone, Richard T; Moeller, Brandon F; Mayer, Robert R; Rosenquist, Bryce; Van Ryswyk, Darin; Eichorn, Drew

    2014-06-01

    Shooter accuracy and stability were monitored while firing two bullpup and two conventional configuration rifles of the same caliber in order to determine if one style of weapon results in superior performance. Considerable debate exists among police and military professionals regarding the differences between conventional configuration weapons, where the magazine and action are located ahead of the trigger, and bullpup configuration, where they are located behind the trigger (closer to the user). To date, no published research has attempted to evaluate this question from a physical ergonomics standpoint, and the knowledge that one style might improve stability or result in superior performance is of interest to countless military, law enforcement, and industry experts. A live-fire evaluation of both weapon styles was performed using a total of 48 participants. Shooting accuracy and fluctuations in biomechanical stability (center of pressure) were monitored while subjects used the weapons to perform standard drills. The bullpup weapon designs were found to provide a significant advantage in accuracy and shooter stability, while subjects showed considerable preference toward the conventional weapons. Although many mechanical and maintenance issues must be considered before committing to a bullpup or conventional weapon system, it is clear in terms of basic human stability that the bullpup is the more advantageous configuration. Results can be used by competitive shooter, military, law enforcement, and industry experts while outfitting personnel with a weapon system that leads to superior performance.

  11. Electromagnetic Monitoring and Control of a Plurality of Nanosatellites

    NASA Technical Reports Server (NTRS)

    Soloway, Donald I. (Inventor)

    2017-01-01

    A method for monitoring position of and controlling a second nanosatellite (NS) relative to a position of a first NS. Each of the first and second NSs has a rectangular or cubical configuration of independently activatable, current-carrying solenoids, each solenoid having an independent magnetic dipole moment vector, μ1 and μ2. A vector force F and a vector torque are expressed as linear or bilinear combinations of the first set and second set of magnetic moments, and a distance vector extending between the first and second NSs is estimated. Control equations are applied to estimate the vectors μ1 and μ2 required to move the NSs toward a desired NS configuration. This extends to control of N nanosatellites.
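
    The abstract does not reproduce the control equations; for orientation, the standard point-dipole relations that make the force and torque bilinear and linear in the moments are

    $$\mathbf{B}_1(\mathbf{r}) = \frac{\mu_0}{4\pi r^{3}}\left[\,3(\boldsymbol{\mu}_1\!\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \boldsymbol{\mu}_1\right], \qquad \boldsymbol{\tau}_2 = \boldsymbol{\mu}_2\times\mathbf{B}_1(\mathbf{r}), \qquad \mathbf{F}_2 = \nabla\!\left(\boldsymbol{\mu}_2\cdot\mathbf{B}_1(\mathbf{r})\right),$$

    with $\mathbf{r}$ the separation vector between the two nanosatellites; the patent's actual control formulation may differ in detail.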

  12. A wearable neuro-feedback system with EEG-based mental status monitoring and transcranial electrical stimulation.

    PubMed

    Roh, Taehwan; Song, Kiseok; Cho, Hyunwoo; Shin, Dongjoo; Yoo, Hoi-Jun

    2014-12-01

    A wearable neuro-feedback system is proposed with a low-power neuro-feedback SoC (NFS), which supports mental status monitoring with encephalography (EEG) and transcranial electrical stimulation (tES) for neuro-modulation. Self-configured independent component analysis (ICA) is implemented to accelerate source separation at low power. Moreover, an embedded support vector machine (SVM) enables online source classification, configuring the ICA accelerator adaptively depending on the types of the decomposed components. Owing to the hardwired accelerating functions, the NFS dissipates only 4.45 mW to yield 16 independent components. For non-invasive neuro-modulation, tES stimulation up to 2 mA is implemented on the SoC. The NFS is fabricated in 130-nm CMOS technology.

  13. Configurational entropy as a tool to select a physical thick brane model

    NASA Astrophysics Data System (ADS)

    Chinaglia, M.; Cruz, W. T.; Correa, R. A. C.; de Paula, W.; Moraes, P. H. R. S.

    2018-04-01

    We analyze braneworld scenarios via a configurational entropy (CE) formalism. Braneworld scenarios have drawn attention mainly due to the fact that they can explain the hierarchy problem and unify the fundamental forces through a symmetry breaking procedure. Those scenarios localize matter in a (3 + 1) hypersurface, the brane, which is inserted in a higher dimensional space, the bulk. Novel analytical braneworld models, in which the warp factor depends on a free parameter n, were recently released in the literature. In this article we provide a way to constrain this parameter through the relation between information and dynamics of a system described by the CE. We demonstrate that in some cases the CE is an important tool in order to provide the most probable physical system among all the possibilities. In addition, we show that the highest CE is correlated to a tachyonic sector of the configuration, where the solutions for the corresponding model are dynamically unstable.

  14. Modular Analytical Multicomponent Analysis in Gas Sensor Arrays

    PubMed Central

    Chaiyboun, Ali; Traute, Rüdiger; Kiesewetter, Olaf; Ahlers, Simon; Müller, Gerhard; Doll, Theodor

    2006-01-01

    A multi-sensor system is a chemical sensor system which quantitatively and qualitatively records gases with a combination of cross-sensitive gas sensor arrays and pattern recognition software. This paper addresses the issue of data analysis for identification of gases in a gas sensor array. We introduce a software tool for gas sensor array configuration and simulation. It is a modular software package for the acquisition of data from different sensors. A signal evaluation algorithm referred to as the matrix method was used specifically for the software tool. This matrix method computes the gas concentrations from the signals of a sensor array. The software tool was used for the simulation of an array of five sensors to determine the gas concentrations of CH4, NH3, H2, CO and C2H5OH. The results of the present simulated sensor array indicate that the software tool is capable of the following: (a) identify a gas independently of its concentration; (b) estimate the concentration of the gas, even if the system was not previously exposed to this concentration; (c) tell when a gas concentration exceeds a certain value. A gas sensor database was built for the configuration of the software. With the database one can create, generate and manage scenarios and source files for the simulation. With the gas sensor database and the simulation software an on-line Web-based version was developed, with which the user can configure and simulate sensor arrays on-line.
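
    A minimal sketch of a matrix-method style evaluation, under the assumption that each sensor responds approximately linearly to each gas so that the array reading can be inverted by least squares; the sensitivity values below are invented for illustration and do not come from the paper:

```python
# Minimal sketch of a "matrix method" style evaluation, assuming each sensor
# responds approximately linearly to each gas: r = S @ c, with S the
# (sensors x gases) sensitivity matrix obtained from calibration.
# All numbers are made up for illustration.
import numpy as np

gases = ["CH4", "NH3", "H2", "CO", "C2H5OH"]

# Assumed calibration matrix: row i = response of sensor i to unit concentration
S = np.array([
    [0.80, 0.10, 0.05, 0.02, 0.03],
    [0.05, 0.70, 0.10, 0.05, 0.10],
    [0.10, 0.05, 0.75, 0.05, 0.05],
    [0.02, 0.08, 0.05, 0.80, 0.05],
    [0.03, 0.07, 0.05, 0.08, 0.77],
])

true_c = np.array([2.0, 0.0, 1.5, 0.3, 0.0])          # example concentrations
reading = S @ true_c + 0.01 * np.random.randn(5)       # noisy array response

est_c, *_ = np.linalg.lstsq(S, reading, rcond=None)    # invert the linear model
for gas, c in zip(gases, est_c):
    print(f"{gas}: {c:.2f}")
```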

  15. Computer tools for systems engineering at LaRC

    NASA Technical Reports Server (NTRS)

    Walters, J. Milam

    1994-01-01

    The Systems Engineering Office (SEO) has been established to provide life cycle systems engineering support to Langley Research Center projects. Over the last two years, the computing market has been reviewed for tools which could enhance the effectiveness and efficiency of activities directed towards this mission. A group of interrelated applications has been procured, or is under development, including a requirements management tool, a system design and simulation tool, and a project and engineering database. This paper will review the current configuration of these tools and provide information on future milestones and directions.

  16. A Job Monitoring and Accounting Tool for the LSF Batch System

    NASA Astrophysics Data System (ADS)

    Sarkar, Subir; Taneja, Sonia

    2011-12-01

    This paper presents a web-based job monitoring and group-and-user accounting tool for the LSF Batch System. The user-oriented job monitoring displays a simple and compact quasi real-time overview of the batch farm for both local and Grid jobs. For Grid jobs the Distinguished Name (DN) of the Grid users is shown. The overview monitor provides the most up-to-date status of a batch farm at any time. The accounting tool works with the LSF accounting log files. The accounting information is shown for a few pre-defined time periods by default. However, one can also compute the same information for any arbitrary time window. The tool has already proved to be an extremely useful means to validate more extensive accounting tools available in the Grid world. Several sites have already been using the present tool and more sites running the LSF batch system have shown interest. We shall discuss the various aspects that make the tool essential for site administrators and end-users alike and outline the current status of development as well as future plans.
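
    The aggregation step can be pictured with the small sketch below; the flattened record layout (user, group, finish time, CPU seconds) is an assumption for illustration, whereas the actual tool parses LSF's accounting log files directly:

```python
# Illustrative sketch only: aggregate CPU time per user and per group over an
# arbitrary time window from pre-parsed accounting records. The record layout
# is assumed; the real tool works directly on LSF accounting log files.
from collections import defaultdict
from datetime import datetime

records = [
    # (user, group, finish_time, cpu_seconds) -- made-up example data
    ("alice", "cms",   datetime(2011, 5, 3, 10, 0), 3600.0),
    ("bob",   "atlas", datetime(2011, 5, 4, 12, 0), 7200.0),
    ("alice", "cms",   datetime(2011, 6, 1,  9, 0), 1800.0),
]

def usage(records, start: datetime, end: datetime):
    """Sum CPU seconds per user and per group for jobs finishing in [start, end)."""
    per_user, per_group = defaultdict(float), defaultdict(float)
    for user, group, finished, cpu in records:
        if start <= finished < end:
            per_user[user] += cpu
            per_group[group] += cpu
    return dict(per_user), dict(per_group)

print(usage(records, datetime(2011, 5, 1), datetime(2011, 6, 1)))
```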

  17. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  18. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG) has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, diagonal crack along with the non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for HOG feature extraction technique towards non-destructive quality inspection with appreciably low false alarm as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for a financially and commercially competent industrial growth.
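
    A sketch of the HOG-plus-neural-network classification step is given below with synthetic images standing in for the MMW scans; the scikit-image and scikit-learn calls and the parameter choices are illustrative and are not claimed to match the paper's implementation:

```python
# Sketch of HOG feature extraction followed by an ANN classifier, with random
# synthetic images standing in for the MMW scans. Parameters are illustrative.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
classes = ["no_fault", "vertical", "horizontal", "diagonal", "random"]

# Synthetic 64x64 "images", 20 per class, purely to make the sketch runnable
X_img = rng.random((100, 64, 64))
y = np.repeat(np.arange(len(classes)), 20)

# HOG feature vector per image
X = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in X_img
])

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))   # meaningless on random data
```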

  19. Sub-bandage sensing system for remote monitoring of chronic wounds in healthcare

    NASA Astrophysics Data System (ADS)

    Hariz, Alex; Mehmood, Nasir; Voelcker, Nico

    2015-12-01

    Chronic wounds, such as venous leg ulcers, can be monitored non-invasively by using modern sensing devices and wireless technologies. The development of such wireless diagnostic tools may improve chronic wound management by providing evidence on efficacy of treatments being provided. In this paper we present a low-power portable telemetric system for wound condition sensing and monitoring. The system aims at measuring and transmitting real-time information of wound-site temperature, sub-bandage pressure and moisture level from within the wound dressing. The system comprises commercially available non-invasive temperature, moisture, and pressure sensors, which are interfaced with a telemetry device on a flexible 0.15 mm thick printed circuit material, making up a lightweight biocompatible sensing device. The real-time data obtained is transmitted wirelessly to a portable receiver which displays the measured values. The performance of the whole telemetric sensing system is validated on a mannequin leg using commercial compression bandages and dressings. A number of trials on a healthy human volunteer are performed where treatment conditions were emulated using various compression bandage configurations. A reliable and repeatable performance of the system is achieved under compression bandage and with minimal discomfort to the volunteer. The system is capable of reporting instantaneous changes in bandage pressure, moisture level and local temperature at wound site with average measurement resolutions of 0.5 mmHg, 3.0 %RH, and 0.2 °C respectively. Effective range of data transmission is 4-5 m in an open environment.

  20. Audio signal analysis for tool wear monitoring in sheet metal stamping

    NASA Astrophysics Data System (ADS)

    Ubhayaratne, Indivarie; Pereira, Michael P.; Xiang, Yong; Rolfe, Bernard F.

    2017-02-01

    Stamping tool wear can significantly degrade product quality, and hence, online tool condition monitoring is a timely need in many manufacturing industries. Even though a large amount of research has been conducted employing different sensor signals, there is still an unmet demand for a low-cost, easy-to-set-up condition monitoring system. Audio signal analysis is a simple method that has the potential to meet this demand, but has not been previously used for stamping process monitoring. Hence, this paper studies the existence and the significance of the correlation between emitted sound signals and the wear state of sheet metal stamping tools. The corrupting sources generated by the tooling of the stamping press and surrounding machinery have higher amplitudes compared to that of the sound emitted by the stamping operation itself. Therefore, a newly developed semi-blind signal extraction technique was employed as a pre-processing technique to mitigate the contribution of these corrupting sources. The spectral analysis results of the raw and extracted signals demonstrate a significant qualitative relationship between wear progression and the emitted sound signature. This study lays the basis for employing low-cost audio signal analysis in the development of a real-time industrial tool condition monitoring system.

  1. Development of a knowledge acquisition tool for an expert system flight status monitor

    NASA Technical Reports Server (NTRS)

    Disbrow, J. D.; Duke, E. L.; Regenie, V. A.

    1986-01-01

    Two of the main issues in artificial intelligence today are knowledge acquisition and knowledge representation. The Dryden Flight Research Facility of NASA's Ames Research Center is presently involved in the design and implementation of an expert system flight status monitor that will provide expertise and knowledge to aid the flight systems engineer in monitoring today's advanced high-performance aircraft. The flight status monitor can be divided into two sections: the expert system itself and the knowledge acquisition tool. The knowledge acquisition tool, the means it uses to extract knowledge from the domain expert, and how that knowledge is represented for computer use are discussed. An actual aircraft system has been codified by this tool with great success. Future real-time use of the expert system has been facilitated by using the knowledge acquisition tool to easily generate a logically consistent and complete knowledge base.

  2. Development of a knowledge acquisition tool for an expert system flight status monitor

    NASA Technical Reports Server (NTRS)

    Disbrow, J. D.; Duke, E. L.; Regenie, V. A.

    1986-01-01

    Two of the main issues in artificial intelligence today are knowledge acquisition and knowledge representation. The Dryden Flight Research Facility of NASA's Ames Research Center is presently involved in the design and implementation of an expert system flight status monitor that will provide expertise and knowledge to aid the flight systems engineer in monitoring today's advanced high-performance aircraft. The flight status monitor can be divided into two sections: the expert system itself and the knowledge acquisition tool. This paper discusses the knowledge acquisition tool, the means it uses to extract knowledge from the domain expert, and how that knowledge is represented for computer use. An actual aircraft system has been codified by this tool with great success. Future real-time use of the expert system has been facilitated by using the knowledge acquisition tool to easily generate a logically consistent and complete knowledge base.

  3. Implementing Information Assurance - Beyond Process

    DTIC Science & Technology

    2009-01-01

    disabled or properly configured. Tools and scripts are available to expedite the configuration process on some platforms. For example, approved Windows...in the System Security Plan (SSP) or Information Security Plan (ISP). Any PPSs not required for operation by the system must be disabled. This...Services must be disabled. Implementing an IM capability within the boundary carries many policy and documentation requirements. Username and passwords

  4. Configurational assignments of conformationally restricted bis-monoterpene hydroquinones: Utility in exploration of endangered plants

    Treesearch

    Joonseok Oh; John J. Bowling; Amar G. Chittiboyina; Robert J. Doerksen; Daneel Ferreira; Theodor D. Leininger; Mark T. Hamann

    2013-01-01

    Endangered plant species are an important resource for new chemistry. Lindera melissifolia is native to the Southeastern U.S. and scarcely populates the edges of lakes and ponds. Quantum mechanics (QM) used in combination with NMR/ECD is a powerful tool for the assignment of absolute configuration in lieu of X-ray crystallography. Methods: The EtOAc extract of L....

  5. DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)

    NASA Technical Reports Server (NTRS)

    Keith, B.

    1994-01-01

    Typical network monitors measure status of host computers and data traffic among hosts. A monitor to collect statistics about individual processes must be unobtrusive and possess the ability to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, Distributed Application Monitor Tool, is a distributed application program that will collect network statistics and make them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Potential users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors. Application processes require no changes to be monitored by this tool. Neither does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database. This database contains all information available about currently executing processes. Expanding the information monitored by the tool can be done by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components; the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer. All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating systems information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes. 
It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
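
    As a loose modern analogue of the application-layer monitoring idea described above (locating processes and the connections between them without their cooperation), the following Python sketch uses the third-party psutil library; it is not DAMT code, and the process names are hypothetical.

      # Loose modern analogue, not DAMT itself: locate processes by name and list TCP
      # connections between them using the third-party psutil library (DAMT read the OS
      # process database and used a LAN analyzer). Process names here are hypothetical,
      # and inspecting other users' processes may require elevated privileges.
      import psutil

      def find_processes(name):
          """Return psutil.Process objects whose executable name matches `name`."""
          return [p for p in psutil.process_iter(["name"]) if p.info["name"] == name]

      def connections_between(client, server):
          """Return (laddr, raddr) pairs where `client` talks to a port used by `server`."""
          server_ports = {c.laddr.port for c in server.connections(kind="tcp") if c.laddr}
          return [(c.laddr, c.raddr) for c in client.connections(kind="tcp")
                  if c.raddr and c.raddr.port in server_ports]

      for srv in find_processes("server_process"):
          for cli in find_processes("client_process"):
              print(connections_between(cli, srv))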

  6. Improvement of Computer Software Quality through Software Automated Tools.

    DTIC Science & Technology

    1986-08-30

    information that is returned from the tools to the human user, and the forms in which these outputs are presented. STAGE OF DEVELOPMENT: What... AUTOMATED SOFTWARE TOOL MONITORING SYSTEM APPENDIX 2 INTRODUCTION This document and Automated Software Tool Monitoring Program (Appendix 1) are...t Output Output features provide links from the tool to both the human user and the target machine (where applicable). They describe the types

  7. Operational skill assessment of the IBI-MFC Ocean Forecasting System within the frame of the CMEMS.

    NASA Astrophysics Data System (ADS)

    Lorente Jimenez, Pablo; Garcia-Sotillo, Marcos; Amo-Balandron, Arancha; Aznar Lecocq, Roland; Perez Gomez, Begoña; Levier, Bruno; Alvarez-Fanjul, Enrique

    2016-04-01

    Since operational ocean forecasting systems (OOFSs) are increasingly used as tools to support high-stakes decision-making for coastal management, a rigorous skill assessment of model performance becomes essential. In this context, the IBI-MFC (Iberia-Biscay-Ireland Monitoring & Forecasting Centre) has been providing daily ocean model estimates and forecasts for the IBI regional seas since 2011, first in the frame of MyOcean projects and later as part of the Copernicus Marine Environment Monitoring Service (CMEMS). A comprehensive web validation tool named NARVAL (North Atlantic Regional VALidation) has been developed to routinely monitor IBI performance and to evaluate the model's veracity and prognostic capabilities. Three-dimensional comparisons are carried out on different time bases ('online mode', with daily verifications, and 'delayed mode', for longer time periods) using a broad variety of in-situ (buoys, tide-gauges, ARGO-floats, drifters and gliders) and remote-sensing (satellite and HF radars) observational sources as reference fields to validate against the NEMO model solution. Product quality indicators and meaningful skill metrics are automatically computed not only averaged over the entire IBI domain but also over specific sub-regions of particular interest from a user perspective (i.e. coastal or shelf areas) in order to determine IBI spatial and temporal uncertainty levels. A complementary aspect of the NARVAL web tool is the intercomparison of different CMEMS forecast model solutions in overlapping areas. Notable efforts are in progress in order to quantitatively assess the quality and consistency of nested system outputs by setting up specific intercomparison exercises on different temporal and spatial scales, encompassing global configurations (CMEMS Global system), regional applications (NWS and MED ones) and local high-resolution coastal models (i.e. the PdE SAMPA system in the Gibraltar Strait). NARVAL constitutes a powerful approach to increase our knowledge of the IBI-MFC forecast system and helps us inform CMEMS end users about the confidence level of the ocean forecasting products provided by routinely delivering QUality Information Documents (QUIDs). It allows the detection of strengths and weaknesses in the modeling of several key physical processes and the understanding of potential sources of discrepancies in IBI predictions. Once the numerical model shortcomings are identified, potential improvements can be achieved thanks to reliable upgrades, driving the evolution of the IBI OOFS towards more refined and advanced versions.

  8. Electronic self-monitoring of mood using IT platforms in adult patients with bipolar disorder: A systematic review of the validity and evidence.

    PubMed

    Faurholt-Jepsen, Maria; Munkholm, Klaus; Frost, Mads; Bardram, Jakob E; Kessing, Lars Vedel

    2016-01-15

    Various paper-based mood charting instruments are used in the monitoring of symptoms in bipolar disorder. During recent years an increasing number of electronic self-monitoring tools have been developed. The objectives of this systematic review were 1) to evaluate the validity of electronic self-monitoring tools as a method of evaluating mood compared to clinical rating scales for depression and mania and 2) to investigate the effect of electronic self-monitoring tools on clinically relevant outcomes in bipolar disorder. A systematic review of the scientific literature, reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, was conducted. MEDLINE, Embase, PsycINFO and The Cochrane Library were searched and supplemented by hand search of reference lists. Databases were searched for 1) studies on electronic self-monitoring tools in patients with bipolar disorder reporting on validity of electronically self-reported mood ratings compared to clinical rating scales for depression and mania and 2) randomized controlled trials (RCT) evaluating electronic mood self-monitoring tools in patients with bipolar disorder. A total of 13 published articles were included. Seven articles were RCTs and six were longitudinal studies. Electronic self-monitoring of mood was considered valid compared to clinical rating scales for depression in six out of six studies, and in two out of seven studies compared to clinical rating scales for mania. The included RCTs primarily investigated the effect of heterogeneous electronically delivered interventions; none of the RCTs investigated the sole effect of electronic mood self-monitoring tools. Methodological issues with risk of bias at different levels limited the evidence in the majority of studies. Electronic self-monitoring of mood in depression appears to be a valid measure of mood, in contrast to self-monitoring of mood in mania. There are as yet few studies on the effect of electronic self-monitoring of mood in bipolar disorder. The evidence on electronic self-monitoring is limited by methodological issues and by a lack of RCTs. Although the idea of electronic self-monitoring of mood seems appealing, studies using rigorous methodology investigating the beneficial as well as possible harmful effects of electronic self-monitoring are needed.

  9. Implementation of Simple and Functional Web Applications at the Alaska Volcano Observatory Remote Sensing Group

    NASA Astrophysics Data System (ADS)

    Skoog, R. A.

    2007-12-01

    Web pages are ubiquitous and accessible, but when compared to stand-alone applications they are limited in capability. The Alaska Volcano Observatory (AVO) Remote Sensing Group has implemented web pages and supporting server software that provide relatively advanced features to any user able to meet basic requirements. Anyone in the world with access to a modern web browser (such as Mozilla Firefox 1.5 or Internet Explorer 6) and a reasonable internet connection can fully use the tools, with no software installation or configuration. This allows faculty, staff and students at AVO to perform many aspects of volcano monitoring from home or the road as easily as from the office. Additionally, AVO collaborators such as the National Weather Service and the Anchorage Volcanic Ash Advisory Center are able to use these web tools to quickly assess volcanic events. Capabilities of this web software include (1) the ability to obtain accurate measured remote sensing data values from a semi-quantitative compressed image of a large area, (2) the ability to view data from a wide time range of data swaths, (3) the ability to view many different satellite remote sensing spectral bands and combinations and to adjust color range thresholds, and (4) the ability to export to KML files viewable in virtual globes such as Google Earth. The technologies behind this implementation are primarily Javascript, PHP, and MySQL, which are free to use and well documented, in addition to Terascan, a commercial software package used to extract data from level-0 data files. These technologies will be presented in conjunction with the techniques used to combine them into the final product used by AVO and its collaborators for operational volcanic monitoring.

  10. RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.

    PubMed

    Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang

    2014-01-01

    In this paper, we designed a reliable and image configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules to ensure the reliability and real-time performance of the arena system. Easily configurable interfaces, implemented as Python scripts on a desktop computer, are provided to transmit the visual patterns to the LED distributor online and configure RICA dynamically. The new arena system will be a powerful tool for investigating the quantitative relationship between visual inputs and induced flight behaviors, and will also be helpful for visual-motor research in other related fields.

  11. Verification of a rapid mooring and foundation design tool

    DOE PAGES

    Weller, Sam D.; Hardwick, Jon; Gomez, Steven; ...

    2018-02-15

    Marine renewable energy devices require mooring and foundation systems that are suitable in terms of device operation and are also robust and cost effective. In the initial stages of mooring and foundation development a large number of possible configuration permutations exist. Filtering of unsuitable designs is possible using information specific to the deployment site (i.e. bathymetry, environmental conditions) and device (i.e. mooring and/or foundation system role and cable connection requirements). The identification of a final solution requires detailed analysis, which includes load cases based on extreme environmental statistics following certification guidance processes. Static and/or quasi-static modelling of the mooring and/or foundation system serves as an intermediate design filtering stage enabling dynamic time-domain analysis to be focused on a small number of potential configurations. Mooring and foundation design is therefore reliant on logical decision making throughout this stage-gate process. The open-source DTOcean (Optimal Design Tools for Ocean Energy Arrays) Tool includes a mooring and foundation module, which automates the configuration selection process for fixed and floating wave and tidal energy devices. As far as the authors are aware, this is one of the first tools to be developed for the purpose of identifying potential solutions during the initial stages of marine renewable energy design. While the mooring and foundation module does not replace a full design assessment, it provides, in addition to suitable configuration solutions, assessments of reliability, economics and environmental impact. This article provides insight into the solution identification approach used by the module and features the verification of both the mooring system calculations and the foundation design using commercial software. Several case studies are investigated: a floating wave energy converter and several anchoring systems. It is demonstrated that the mooring and foundation module is able to provide device and/or site developers with rapid mooring and foundation design solutions to appropriate design criteria.

  12. Verification of a rapid mooring and foundation design tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weller, Sam D.; Hardwick, Jon; Gomez, Steven

    Marine renewable energy devices require mooring and foundation systems that are suitable in terms of device operation and are also robust and cost effective. In the initial stages of mooring and foundation development a large number of possible configuration permutations exist. Filtering of unsuitable designs is possible using information specific to the deployment site (i.e. bathymetry, environmental conditions) and device (i.e. mooring and/or foundation system role and cable connection requirements). The identification of a final solution requires detailed analysis, which includes load cases based on extreme environmental statistics following certification guidance processes. Static and/or quasi-static modelling of the mooring and/or foundation system serves as an intermediate design filtering stage enabling dynamic time-domain analysis to be focused on a small number of potential configurations. Mooring and foundation design is therefore reliant on logical decision making throughout this stage-gate process. The open-source DTOcean (Optimal Design Tools for Ocean Energy Arrays) Tool includes a mooring and foundation module, which automates the configuration selection process for fixed and floating wave and tidal energy devices. As far as the authors are aware, this is one of the first tools to be developed for the purpose of identifying potential solutions during the initial stages of marine renewable energy design. While the mooring and foundation module does not replace a full design assessment, it provides, in addition to suitable configuration solutions, assessments of reliability, economics and environmental impact. This article provides insight into the solution identification approach used by the module and features the verification of both the mooring system calculations and the foundation design using commercial software. Several case studies are investigated: a floating wave energy converter and several anchoring systems. It is demonstrated that the mooring and foundation module is able to provide device and/or site developers with rapid mooring and foundation design solutions to appropriate design criteria.

  13. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.

    1973-01-01

    A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demand of the various shuttle booster subsystems. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.

  14. Analytic Patch Configuration (APC) gateway version 1.0 user's guide

    NASA Technical Reports Server (NTRS)

    Bingel, Bradford D.

    1990-01-01

    The Analytic Patch Configuration (APC) is an interactive software tool which translates aircraft configuration geometry files from one format into another. This initial release of the APC Gateway accommodates six formats: the four accepted APC formats (89f, 89fd, 89u, and 89ud), the PATRAN 2.x phase 1 neutral file format, and the Integrated Aerodynamic Analysis System (IAAS) General Geometry (GG) format. Written in ANSI FORTRAN 77 and completely self-contained, the APC Gateway is very portable and has already been installed on CDC/NOS, VAX/VMS, SUN, SGI/IRIS, CONVEX, and CRAY hosts.

  15. Analysis and Tools for Improved Management of Connectionless and Connection-Oriented BLE Devices Coexistence

    PubMed Central

    Del Campo, Antonio; Cintioni, Lorenzo; Spinsante, Susanna; Gambi, Ennio

    2017-01-01

    With the introduction of low-power wireless technologies, like Bluetooth Low Energy (BLE), new applications are approaching the home automation, healthcare, fitness, automotive and consumer electronics markets. BLE devices are designed to maximize the battery life, i.e., to run for a long time on a single coin-cell battery. In typical application scenarios of home automation and Ambient Assisted Living (AAL), the sensors that monitor relatively unpredictable and rare events should coexist with other sensors that continuously communicate health or environmental parameter measurements. The former usually work in connectionless mode, acting as advertisers, while the latter need a persistent connection, acting as slave nodes. The coexistence of connectionless and connection-oriented networks, which share the same central node, can be required to reduce the number of handling devices, thus keeping the network complexity low and limiting packet traffic congestion. In this paper, the medium access management, operated by the central node, has been modeled, focusing on the scheduling procedure in both connectionless and connection-oriented communication. The models have been merged to provide a tool supporting the configuration design of BLE devices, during the network design phase that precedes the real implementation. The results highlight the suitability of the proposed tool: the ability to set the device parameters so as to keep a practical discovery latency for event-driven sensors and to avoid undesired overlaps between scheduled scanning and connection phases caused by poor management at the central node. PMID:28387724

  16. Analysis and Tools for Improved Management of Connectionless and Connection-Oriented BLE Devices Coexistence.

    PubMed

    Del Campo, Antonio; Cintioni, Lorenzo; Spinsante, Susanna; Gambi, Ennio

    2017-04-07

    With the introduction of low-power wireless technologies, like Bluetooth Low Energy (BLE), new applications are approaching the home automation, healthcare, fitness, automotive and consumer electronics markets. BLE devices are designed to maximize the battery life, i.e., to run for a long time on a single coin-cell battery. In typical application scenarios of home automation and Ambient Assisted Living (AAL), the sensors that monitor relatively unpredictable and rare events should coexist with other sensors that continuously communicate health or environmental parameter measurements. The former usually work in connectionless mode, acting as advertisers, while the latter need a persistent connection, acting as slave nodes. The coexistence of connectionless and connection-oriented networks, which share the same central node, can be required to reduce the number of handling devices, thus keeping the network complexity low and limiting packet traffic congestion. In this paper, the medium access management, operated by the central node, has been modeled, focusing on the scheduling procedure in both connectionless and connection-oriented communication. The models have been merged to provide a tool supporting the configuration design of BLE devices, during the network design phase that precedes the real implementation. The results highlight the suitability of the proposed tool: the ability to set the device parameters so as to keep a practical discovery latency for event-driven sensors and to avoid undesired overlaps between scheduled scanning and connection phases caused by poor management at the central node.
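
    The kind of schedule reasoning such a model supports can be illustrated with a toy overlap check between scan windows and connection events; all timing values below are invented and this is not the authors' model.

      # Toy schedule check, not the authors' model: flag overlaps between the central
      # node's scan windows (for connectionless advertisers) and its connection events.
      # All timing values are invented.
      def overlaps(scan_windows, connection_events):
          """Return pairs of (scan, connection) intervals, in ms, that collide."""
          clashes = []
          for s_start, s_end in scan_windows:
              for c_start, c_end in connection_events:
                  if s_start < c_end and c_start < s_end:  # standard interval-overlap test
                      clashes.append(((s_start, s_end), (c_start, c_end)))
          return clashes

      scan_windows = [(0, 30), (100, 130), (200, 230)]        # ms
      connection_events = [(25, 40), (150, 160), (225, 240)]  # ms
      print(overlaps(scan_windows, connection_events))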

  17. Planar rotational magnetic micromotors with integrated shaft encoder and magnetic rotor levitation

    NASA Technical Reports Server (NTRS)

    Guckel, Henry; Christenson, T. R.; Skrobis, K. J.; Klein, J.; Karnowsky, M.

    1994-01-01

    Deep x-ray lithography and electroplating may be combined to form a fabrication tool for micromechanical devices with large structural heights, up to 500 microns, and extreme edge acuities, less than 0.1 micron of run-out per 100 microns of height. This process concept, which originated in Germany as LIGA, may be further extended by adding surface micromachining. This extension permits the fabrication of precision metal and plastic parts which may be assembled into three-dimensional micromechanical components and systems. The processing tool may be used to fabricate devices from ferromagnetic material such as nickel and nickel-iron alloys. These materials, when properly heat treated, exhibit acceptable magnetic behavior for current-to-flux conversion and marginal behavior for permanent magnet applications. The tool and materials have been tested via planar, magnetic, rotational micromotor fabrication. Three-phase reluctance machines of the 6:4 configuration with 280 micron diameter rotors have been tested and analyzed. Stable rotational speeds to 34,000 rpm with output torques above 10 x 10(exp -9) N-m have been obtained. The behavior is monitored with integrated shaft encoders, photodiodes that measure the rotor response. Magnetic levitation of the rotor via reluctance forces has been achieved and has reduced frictional torque losses to less than 1 percent of the available torque. The results indicate that high speed limits of these actuators are related to torque ripple. Hysteresis motors with magnetic bearings are under consideration and will produce high speed rotational machines with excellent sensor application potential.

  18. Verifying the secure setup of UNIX client/servers and detection of network intrusion

    NASA Astrophysics Data System (ADS)

    Feingold, Richard; Bruestle, Harry R.; Bartoletti, Tony; Saroyan, R. A.; Fisher, John M.

    1996-03-01

    This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global `Infosphere' presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, applications, and the like stores, transmits, and processes information. This information is now recognized as an asset to governments, corporations, and individuals alike. This information must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check on their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on the Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks on their systems and networks.

  19. Customizable tool for ecological data entry, assessment, monitoring, and interpretation

    USDA-ARS?s Scientific Manuscript database

    The Database for Inventory, Monitoring and Assessment (DIMA) is a highly customizable tool for data entry, assessment, monitoring, and interpretation. DIMA is a Microsoft Access database that can easily be used without Access knowledge and is available at no cost. Data can be entered for common, nat...

  20. Analysis of the meal-dependent intragastric performance of a gastric-retentive tablet assessed by magnetic resonance imaging.

    PubMed

    Steingoetter, A; Kunz, P; Weishaupt, D; Mäder, K; Lengsfeld, H; Thumshirn, M; Boesiger, P; Fried, M; Schwizer, W

    2003-10-01

    Modern medical imaging modalities can trace labelled oral drug dosage forms in the gastrointestinal tract, and thus represent important tools for the evaluation of their in vivo performance. The application of gastric-retentive drug delivery systems to improve bioavailability and to avoid unwanted plasma peak concentrations of orally administered drugs is of special interest in clinical and pharmaceutical research. To determine the influence of meal composition and timing of tablet administration on the intragastric performance of a gastric-retentive floating tablet using magnetic resonance imaging in the sitting position. A tablet formulation was labelled with iron oxide particles as negative magnetic resonance contrast marker to allow the monitoring of the tablet position in the food-filled human stomach. Labelled tablet was administered, together with three different solid meals, to volunteers seated in a 0.5-T open-configuration magnetic resonance system. Volunteers were followed over a 4-h period. Labelled tablet was detectable in all subjects throughout the entire study. The tablet showed persistent good intragastric floating performance independent of meal composition. Unfavourable timing of tablet administration had a minor effect on the intragastric tablet residence time and floating performance. Magnetic resonance imaging can reliably monitor and analyse the in vivo performance of labelled gastric-retentive tablets in the human stomach.

  1. Measurement of in-plane elasticity of live cell layers using a pressure sensor embedded microfluidic device

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Han; Wang, Chien-Kai; Chen, Yu-An; Peng, Chien-Chung; Liao, Wei-Hao; Tung, Yi-Chung

    2016-11-01

    In various physiological activities, cells experience stresses along their in-plane direction when facing substrate deformation. The capability to continuously monitor the elasticity of live cell layers over time is highly desired for investigating cell property variation during various transformations under normal or disease states. This paper reports time-lapse measurement of live cell layer in-plane elasticity using a pressure sensor embedded microfluidic device. The sensor converts pressure-induced deformation of a flexible membrane to electrical signals. When cells are cultured on top of the membrane, flexural rigidity of the composite membrane increases and further changes the output electrical signals. In the experiments, human embryonic lung fibroblast (MRC-5) cells are cultured and analyzed to estimate the in-plane elasticity. In addition, the cells are treated with a growth factor to simulate lung fibrosis to study the effects of cell transformation on the elasticity variation. For comparison, elasticity measurement on the cells by atomic force microscopy (AFM) is also performed. The experimental results confirm highly anisotropic configuration and material properties of cells. Furthermore, the in-plane elasticity can be monitored during the cell transformation after the growth factor stimulation. Consequently, the developed microfluidic device provides a powerful tool to study physical properties of cells for fundamental biophysics and biomedical research.

  2. Characterization of a Field Spectroradiometer for Unattended Vegetation Monitoring. Key Sensor Models and Impacts on Reflectance

    PubMed Central

    Pacheco-Labrador, Javier; Martín, M. Pilar

    2015-01-01

    Field spectroradiometers integrated in automated systems at Eddy Covariance (EC) sites are a powerful tool for monitoring and upscaling vegetation physiology and carbon and water fluxes. However, exposure to varying environmental conditions can affect the functioning of these sensors, especially if these cannot be completely insulated and stabilized. This can cause inaccuracy in the spectral measurements and hinder the comparison between data acquired at different sites. This paper describes the characterization of key sensor models in a double beam spectroradiometer necessary to calculate the Hemispherical-Conical Reflectance Factor (HCRF). Dark current, temperature dependence, non-linearity, spectral calibration and cosine receptor directional responses are modeled in the laboratory as a function of temperature, instrument settings, radiation measured or illumination angle. These models are used to correct the spectral measurements acquired continuously by the same instrument integrated outdoors in an automated system (AMSPEC-MED). Results suggest that some of the instrumental issues cancel each other out or can be controlled by the instrument configuration, so that changes induced in HCRF reached about 0.05 at maximum. However, these corrections are necessary to ensure the inter-comparison of data with other ground or remote sensors and to discriminate instrumentally induced changes in HCRF from those related to vegetation physiology and directional effects. PMID:25679315

  3. Photonic Low Cost Micro-Sensor for in-Line Wear Particle Detection in Flowing Lube Oils.

    PubMed

    Mabe, Jon; Zubia, Joseba; Gorritxategi, Eneko

    2017-03-14

    The presence of microscopic particles in suspension in industrial fluids is often an early warning of latent or imminent failures in the equipment or processes where they are being used. This manuscript describes work undertaken to integrate different photonic principles with a micro-mechanical fluidic structure and an embedded processor to develop a fully autonomous wear debris sensor for in-line monitoring of industrial fluids. Lens-less microscopy, stroboscopic illumination, a CMOS imager and embedded machine vision technologies have been merged to develop a sensor solution that is able to detect and quantify the number and size of micrometric particles suspended in a continuous flow of a fluid. A laboratory test-bench has been arranged for setting up the configuration of the optical components targeting a static oil sample and then a sensor prototype has been developed for migrating the measurement principles to real conditions in terms of operating pressure and flow rate of the oil. Imaging performance is quantified using micro calibrated samples, as well as by measuring real used lubricating oils. Sampling a large fluid volume with a decent 2D spatial resolution, this photonic micro sensor offers a powerful tool at very low cost and in a compact size for in-line wear debris monitoring.

  4. Photonic Low Cost Micro-Sensor for in-Line Wear Particle Detection in Flowing Lube Oils

    PubMed Central

    Mabe, Jon; Zubia, Joseba; Gorritxategi, Eneko

    2017-01-01

    The presence of microscopic particles in suspension in industrial fluids is often an early warning of latent or imminent failures in the equipment or processes where they are being used. This manuscript describes work undertaken to integrate different photonic principles with a micro-mechanical fluidic structure and an embedded processor to develop a fully autonomous wear debris sensor for in-line monitoring of industrial fluids. Lens-less microscopy, stroboscopic illumination, a CMOS imager and embedded machine vision technologies have been merged to develop a sensor solution that is able to detect and quantify the number and size of micrometric particles suspended in a continuous flow of a fluid. A laboratory test-bench has been arranged for setting up the configuration of the optical components targeting a static oil sample and then a sensor prototype has been developed for migrating the measurement principles to real conditions in terms of operating pressure and flow rate of the oil. Imaging performance is quantified using micro calibrated samples, as well as by measuring real used lubricating oils. Sampling a large fluid volume with a decent 2D spatial resolution, this photonic micro sensor offers a powerful tool at very low cost and in a compact size for in-line wear debris monitoring. PMID:28335436
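
    As a simplified illustration of counting particles in a single frame (not the sensor's embedded machine-vision pipeline), the following Python sketch thresholds a synthetic grayscale image and labels connected blobs with SciPy; the threshold value and the image itself are assumptions.

      # Simplified single-frame particle count, not the sensor's embedded vision pipeline:
      # threshold a synthetic grayscale frame and label connected blobs with SciPy.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(1)
      frame = rng.normal(0.1, 0.02, size=(256, 256))  # stand-in for a camera frame
      frame[50:54, 60:64] = 0.8                       # two synthetic "particles"
      frame[200:206, 30:36] = 0.9

      mask = frame > 0.5                              # assumed intensity threshold
      labels, count = ndimage.label(mask)
      areas = ndimage.sum(mask, labels, index=range(1, count + 1))  # blob areas in pixels
      print(f"particles detected: {count}, areas (px): {areas}")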

  5. The design of an intelligent human-computer interface for the test, control and monitor system

    NASA Technical Reports Server (NTRS)

    Shoaff, William D.

    1988-01-01

    The graphical intelligence and assistance capabilities of a human-computer interface for the Test, Control, and Monitor System at Kennedy Space Center are explored. The report focuses on how a particular commercial off-the-shelf graphical software package, Data Views, can be used to produce tools that build widgets such as menus, text panels, graphs, icons, windows, and ultimately complete interfaces for monitoring data from an application; controlling an application by providing input data to it; and testing an application by both monitoring and controlling it. A complete set of tools for building interfaces is described in a manual for the TCMS toolkit. Simple tools create primitive widgets such as lines, rectangles and text strings. Intermediate level tools create pictographs from primitive widgets, and connect processes to either text strings or pictographs. Other tools create input objects; Data Views supports output objects directly, thus output objects are not considered. Finally, a set of utilities for executing, monitoring use, editing, and displaying the content of interfaces is included in the toolkit.

  6. Enhanced methodology of focus control and monitoring on scanner tool

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Jen; Kim, Young Ki; Hao, Xueli; Gomez, Juan-Manuel; Tian, Ye; Kamalizadeh, Ferhad; Hanson, Justin K.

    2017-03-01

    As the technology node shrinks from 14 nm to 7 nm, the reliability of tool monitoring techniques used in advanced semiconductor fabs to achieve high yield and quality becomes more critical. Tool health monitoring methods involve periodic sampling of moderately processed test wafers to check for particles, defects, and tool stability in order to ensure proper tool health. For lithography TWINSCAN scanner tools, the requirements for overlay stability and focus control are very strict. Current scanner tool health monitoring methods include running BaseLiner to ensure proper tool stability on a periodic basis. The focus measurement on YIELDSTAR by real-time or library-based reconstruction of critical dimensions (CD) and side wall angle (SWA) has been demonstrated as an accurate metrology input to the control loop. The high accuracy and repeatability of the YIELDSTAR focus measurement provides a common reference of scanner setup and user process. In order to further improve the metrology and matching performance, Diffraction Based Focus (DBF) metrology, enabling accurate, fast, and non-destructive focus acquisition, has been successfully utilized for focus monitoring/control of TWINSCAN NXT immersion scanners. The optimal DBF target was determined to have minimized dose crosstalk, dynamic precision, set-get residual, and lens aberration sensitivity. By exploiting this new measurement target design, 80% improvement in tool-to-tool matching, >16% improvement in run-to-run mean focus stability, and >32% improvement in focus uniformity have been demonstrated compared to the previous BaseLiner methodology. Matching <2.4 nm across multiple NXT immersion scanners has been achieved with the new methodology of set baseline reference. This baseline technique, with either conventional BaseLiner low numerical aperture (NA=1.20) mode or advanced illumination high NA mode (NA=1.35), has also been evaluated to have consistent performance. This enhanced methodology of focus control and monitoring under multiple illumination conditions opens an avenue to significantly reduce Focus-Exposure Matrix (FEM) wafer exposure for new product/layer best focus (BF) setup.

  7. Dietary Adherence Monitoring Tool for Free-living, Controlled Feeding Studies

    USDA-ARS?s Scientific Manuscript database

    Objective: To devise a dietary adherence monitoring tool for use in controlled human feeding trials involving free-living study participants. Methods: A scoring tool was devised to measure and track dietary adherence for an 8-wk randomized trial evaluating the effects of two different dietary patter...

  8. Automatic aeroponic irrigation system based on Arduino’s platform

    NASA Astrophysics Data System (ADS)

    Montoya, A. P.; Obando, F. A.; Morales, J. G.; Vargas, G.

    2017-06-01

    Recirculating hydroponic culture techniques, such as aeroponics, have several advantages over traditional agriculture and aim to improve the efficiency and environmental impact of agriculture. These techniques require continuous monitoring and automation for proper operation. In this work, an automatically monitored aeroponic irrigation system based on the Arduino free software platform was developed. Analog and digital sensors for measuring the temperature, flow and level of a nutrient solution in a real greenhouse were implemented. In addition, the pH and electrical conductivity of the nutrient solutions are monitored using the Arduino's differential configuration. The sensor network and the acquisition and automation system are managed by two Arduino modules in a master-slave configuration, which communicate with each other wirelessly over Wi-Fi. Further, data are stored on micro SD cards and the information is loaded onto a web page in real time. The developed device provides important agronomic information when tested with an arugula culture (Eruca sativa Mill). The system could also be employed as an early warning system to detect irrigation malfunctions.

  9. Guidelines and standard procedures for continuous water-quality monitors: Site selection, field operation, calibration, record computation, and reporting

    USGS Publications Warehouse

    Wagner, Richard J.; Mattraw, Harold C.; Ritz, George F.; Smith, Brett A.

    2000-01-01

    The U.S. Geological Survey uses continuous water-quality monitors to assess variations in the quality of the Nation's surface water. A common system configuration for data collection is the four-parameter water-quality monitoring system, which collects temperature, specific conductance, dissolved oxygen, and pH data, although systems can be configured to measure other properties such as turbidity or chlorophyll. The sensors that are used to measure these water properties require careful field observation, cleaning, and calibration procedures, as well as thorough procedures for the computation and publication of final records. Data from sensors can be used in conjunction with collected samples and chemical analyses to estimate chemical loads. This report provides guidelines for site-selection considerations, sensor test methods, field procedures, error correction, data computation, and review and publication processes. These procedures have evolved over the past three decades, and the process continues to evolve with newer technologies.
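
    One routine step described in such guidelines is correcting a continuous record for sensor drift between calibration checks. The Python sketch below applies a simple linear drift correction; the timestamps, calibration offsets and dissolved-oxygen values are hypothetical, and the exact USGS correction procedure may differ.

      # Sketch of a drift correction between two calibration checks; all values are
      # hypothetical and the exact USGS procedure may differ.
      import numpy as np

      def drift_correct(values, times, t0, err0, t1, err1):
          """Remove a linearly interpolated sensor error between calibrations at t0 and t1."""
          error = np.interp(times, [t0, t1], [err0, err1])
          return values - error

      times = np.arange(0, 720, 15)              # minutes since the last calibration
      raw_do = 8.0 + 0.001 * times               # stand-in dissolved-oxygen record (mg/L)
      corrected = drift_correct(raw_do, times, t0=0, err0=0.0, t1=720, err1=0.4)
      print(corrected[:3], corrected[-3:])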

  10. A Debate and Decision-Making Tool for Enhanced Learning

    ERIC Educational Resources Information Center

    López Garcia, Diego A.; Mateo Sanguino, Tomás de J.; Cortés Ancos, Estefania; Fernández de Viana González, Iñaki

    2016-01-01

    Debates have been used to develop critical thinking within teaching environments. Many learning activities are configured as working groups, which use debates to make decisions. Nevertheless, in a classroom debate, only a few students can participate; large work groups are similarly limited. Whilst the use of web tools would appear to offer a…

  11. On predicting monitoring system effectiveness

    NASA Astrophysics Data System (ADS)

    Cappello, Carlo; Sigurdardottir, Dorotea; Glisic, Branko; Zonta, Daniele; Pozzi, Matteo

    2015-03-01

    While the objective of structural design is to achieve stability with an appropriate level of reliability, the design of systems for structural health monitoring is performed to identify a configuration that enables acquisition of data with an appropriate level of accuracy in order to understand the performance of a structure or its condition state. However, a rational standardized approach for monitoring system design is not fully available. Hence, when engineers design a monitoring system, their approach is often heuristic with performance evaluation based on experience, rather than on quantitative analysis. In this contribution, we propose a probabilistic model for the estimation of monitoring system effectiveness based on information available in prior condition, i.e. before acquiring empirical data. The presented model is developed considering the analogy between structural design and monitoring system design. We assume that the effectiveness can be evaluated based on the prediction of the posterior variance or covariance matrix of the state parameters, which we assume to be defined in a continuous space. Since the empirical measurements are not available in prior condition, the estimation of the posterior variance or covariance matrix is performed considering the measurements as a stochastic variable. Moreover, the model takes into account the effects of nuisance parameters, which are stochastic parameters that affect the observations but cannot be estimated using monitoring data. Finally, we present an application of the proposed model to a real structure. The results show how the model enables engineers to predict whether a sensor configuration satisfies the required performance.
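
    The core idea, predicting posterior uncertainty before any data are acquired, can be sketched under a linear-Gaussian assumption (not necessarily the authors' exact formulation): for observations y = Hx + e, the posterior covariance of the state depends only on H and the prior and noise covariances, so candidate sensor configurations can be scored in advance. The matrices below are purely illustrative.

      # Linear-Gaussian sketch of scoring a sensor configuration before data exist:
      # for y = H x + e with Gaussian prior and noise, the posterior covariance of x
      # depends only on H and the covariances. The matrices are illustrative only.
      import numpy as np

      def posterior_covariance(H, prior_cov, noise_cov):
          """Posterior covariance of x for y = H x + e, e ~ N(0, noise_cov), x ~ N(m, prior_cov)."""
          info = np.linalg.inv(prior_cov) + H.T @ np.linalg.inv(noise_cov) @ H
          return np.linalg.inv(info)

      prior_cov = np.diag([1.0, 1.0])              # two state parameters
      noise_cov = 0.05 * np.eye(2)                 # assumed sensor noise
      H_a = np.array([[1.0, 0.0], [1.0, 1.0]])     # candidate sensor layout A
      H_b = np.array([[1.0, 0.0], [1.0, 0.1]])     # candidate sensor layout B

      for name, H in [("A", H_a), ("B", H_b)]:
          cov = posterior_covariance(H, prior_cov, noise_cov)
          print(name, "predicted posterior variances:", np.diag(cov))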

  12. Preliminary research on monitoring the durability of concrete subjected to sulfate attack with optical fibre Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Yue, Yanfei; Bai, Yun; Basheer, P. A. Muhammed; Boland, John J.; Wang, Jing Jing

    2013-04-01

    The formation of ettringite and gypsum from sulfate attack, together with carbonation and chloride ingress, has been considered among the most serious deterioration mechanisms of concrete structures. Although Electrical Resistance Sensors and Fibre Optic Chemical Sensors could be used to monitor the latter two mechanisms in situ, currently there is no system for monitoring the deterioration mechanism of sulfate attack, and such a system still needs to be developed. In this paper, a preliminary study was carried out to investigate the feasibility of monitoring sulfate attack with optical fibre Raman spectroscopy through characterizing the ettringite and gypsum formed in deteriorated cementitious materials under an `optical fibre excitation + spectroscopy objective collection' configuration. Bench-mounted Raman spectroscopy analysis was also used to validate the spectrum obtained from the fibre-objective configuration. The results showed that the expected Raman bands of ettringite and gypsum in the sulfate-attacked cement paste have been clearly identified by the optical fibre Raman spectroscopy and are in good agreement with those identified from bench-mounted Raman spectroscopy. Therefore, based on these preliminary results, there is good potential for developing an optical fibre Raman spectroscopy-based system for monitoring the deterioration mechanisms of concrete subjected to sulfate attack in the future.

  13. Co-scheduling of network resource provisioning and host-to-host bandwidth reservation on high-performance network and storage systems

    DOEpatents

    Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie

    2014-04-22

    A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable media are also disclosed.

  14. Generic robot architecture

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.

  15. Production code control system for hydrodynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We will also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.

  16. Detecting periods of eating during free-living by tracking wrist motion.

    PubMed

    Dong, Yujie; Scisco, Jenna; Wilson, Mike; Muth, Eric; Hoover, Adam

    2014-07-01

    This paper is motivated by the growing prevalence of obesity, a health problem affecting over 500 million people. Measurements of energy intake are commonly used for the study and treatment of obesity. However, the most widely used tools rely upon self-report and require considerable manual effort, leading to underreporting of consumption, noncompliance, and discontinued use over the long term. The purpose of this paper is to describe a new method that uses a watch-like configuration of sensors to continuously track wrist motion throughout the day and automatically detect periods of eating. Our method uses the novel idea that meals tend to be preceded and succeeded by periods of vigorous wrist motion. We describe an algorithm that segments and classifies such periods as eating or noneating activities. We also evaluate our method on a large dataset (43 subjects, 449 total h of data, containing 116 periods of eating) collected during free-living. Our results show an accuracy of 81% for detecting eating at 1-s resolution in comparison to manually marked event logs of periods of eating. These results indicate that vigorous wrist motion is a useful indicator for identifying the boundaries of eating activities, and that our method should prove useful in the continued development of body-worn sensor tools for monitoring energy intake.
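
    A toy segmentation sketch in the spirit of the premise above (eating bracketed by vigorous wrist motion) is given below; the threshold, minimum segment length and synthetic data are invented placeholders, not the published algorithm.

      # Toy segmentation in the spirit of the premise that eating is bracketed by
      # vigorous wrist motion; all parameters and data are invented placeholders.
      import numpy as np

      def segment_by_motion(energy, threshold, min_len):
          """Split a 1 Hz motion-energy trace at the onsets of high-energy runs."""
          vigorous = energy > threshold
          boundaries = [0] + [i for i in range(1, len(energy))
                              if vigorous[i] and not vigorous[i - 1]]
          boundaries.append(len(energy))
          return [(a, b) for a, b in zip(boundaries, boundaries[1:]) if b - a >= min_len]

      rng = np.random.default_rng(2)
      energy = rng.random(3600)        # one hour of wrist-motion energy, 1 sample/s
      energy[1200:1210] = 5.0          # synthetic bursts of vigorous motion
      energy[1800:1810] = 5.0

      for start, end in segment_by_motion(energy, threshold=3.0, min_len=60):
          print(f"candidate segment: {start}-{end} s")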

  17. Measuring multi-configurational character by orbital entanglement

    NASA Astrophysics Data System (ADS)

    Stein, Christopher J.; Reiher, Markus

    2017-09-01

    One of the most critical tasks at the very beginning of a quantum chemical investigation is the choice of either a multi- or single-configurational method. Naturally, many proposals exist to define a suitable diagnostic of the multi-configurational character for various types of wave functions in order to assist this crucial decision. Here, we present a new orbital-entanglement-based multi-configurational diagnostic termed Zs(1). The correspondence of orbital entanglement and static (or non-dynamic) electron correlation permits the definition of such a diagnostic. We chose our diagnostic to meet important requirements such as well-defined limits for pure single-configurational and multi-configurational wave functions. The Zs(1) diagnostic can be evaluated from a partially converged, but qualitatively correct, and therefore inexpensive density matrix renormalisation group wave function as in our recently presented automated active orbital selection protocol. Its robustness and the fact that it can be evaluated at low cost make this diagnostic a practical tool for routine applications.

  18. Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin

    2015-10-25

    I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variations in these factors across wavelet configurations have been explored in image processing, they are not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
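
    A small experiment in the same spirit can be run with PyWavelets: compress a synthetic 1-D signal under two wavelet configurations and compare reconstruction error. The wavelets, decomposition levels and retention fraction below are arbitrary examples, not the configurations studied in the paper.

      # Small experiment in the same spirit: compress a synthetic 1-D signal under two
      # wavelet configurations and compare reconstruction error. PyWavelets (pywt) is
      # assumed available; wavelets, levels and retention fraction are arbitrary choices.
      import numpy as np
      import pywt

      def compress(sig, wavelet, level, keep_fraction):
          """Zero all but the largest `keep_fraction` of coefficients, then reconstruct."""
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          flat = np.concatenate([np.abs(c) for c in coeffs])
          cutoff = np.quantile(flat, 1.0 - keep_fraction)
          thresholded = [pywt.threshold(c, cutoff, mode="hard") for c in coeffs]
          return pywt.waverec(thresholded, wavelet)[: len(sig)]

      t = np.linspace(0, 1, 4096)
      sig = np.sin(2 * np.pi * 40 * t) + 0.3 * np.sin(2 * np.pi * 170 * t)

      for wavelet, level in [("haar", 4), ("db4", 6)]:
          recon = compress(sig, wavelet, level, keep_fraction=0.05)
          rms = np.sqrt(np.mean((sig - recon) ** 2))
          print(f"{wavelet}, level {level}: RMS reconstruction error {rms:.4f}")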

  19. Transfer Function Control for Biometric Monitoring System

    NASA Technical Reports Server (NTRS)

    Chmiel, Alan J. (Inventor); Grodinsky, Carlos M. (Inventor); Humphreys, Bradley T. (Inventor)

    2015-01-01

    A modular apparatus for acquiring biometric data may include circuitry operative to receive an input signal indicative of a biometric condition, the circuitry being configured to process the input signal according to a transfer function thereof and to provide a corresponding processed input signal. A controller is configured to provide at least one control signal to the circuitry to programmatically modify the transfer function of the modular system to facilitate acquisition of the biometric data.
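
    The abstract describes hardware whose transfer function is reprogrammed by a controller; as a purely illustrative software analogue (not the patented circuitry), the sketch below implements a first-order digital filter whose coefficients can be rewritten while samples are being processed.

        class ProgrammableFilter:
            """Illustrative software analogue of a programmable transfer function:
            a first-order IIR filter y[n] = b*x[n] + a*y[n-1] whose coefficients can
            be changed by 'control signals' while data are being acquired."""

            def __init__(self, b=1.0, a=0.0):
                self.b, self.a, self.y = b, a, 0.0

            def set_transfer_function(self, b, a):
                # analogous to the controller modifying the circuit's transfer function
                self.b, self.a = b, a

            def process(self, x):
                self.y = self.b * x + self.a * self.y
                return self.y

        # usage sketch: switch from pass-through to heavy smoothing mid-acquisition
        f = ProgrammableFilter()
        raw = [1.0, 1.2, 0.9, 5.0, 5.1, 4.9]
        out = [f.process(v) for v in raw[:3]]
        f.set_transfer_function(b=0.2, a=0.8)   # re-program for low-pass behaviour
        out += [f.process(v) for v in raw[3:]]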

  20. Fundamental Physics and Practical Applications of Electromagnetic Local Flow Control in High Speed Flows (Rutgers)

    DTIC Science & Technology

    2010-02-16

    field. Techniques utilizing this design use an open-loop control and no flow monitoring sensors are required. Conversely, reactive (or closed-loop) ... and closed (dashed line) configuration. ... closed configuration described above, the ambiguity in the critical limits of the transition ... flow; a new vortex is then shed from the cavity leading edge, closing the feedback loop [31]. Open cavities with an L/D approximately greater than

  1. Enterprise tools to promote interoperability: MonitoringResources.org supports design and documentation of large-scale, long-term monitoringprograms

    NASA Astrophysics Data System (ADS)

    Weltzin, J. F.; Scully, R. A.; Bayer, J.

    2016-12-01

    Individual natural resource monitoring programs have evolved in response to different organizational mandates, jurisdictional needs, issues and questions. We are establishing a collaborative forum for large-scale, long-term monitoring programs to identify opportunities where collaboration could yield efficiency in monitoring design, implementation, analyses, and data sharing. We anticipate these monitoring programs will have similar requirements - e.g. survey design, standardization of protocols and methods, information management and delivery - that could be met by enterprise tools to promote sustainability, efficiency and interoperability of information across geopolitical boundaries or organizational cultures. MonitoringResources.org, a project of the Pacific Northwest Aquatic Monitoring Partnership, provides an on-line suite of enterprise tools focused on aquatic systems in the Pacific Northwest Region of the United States. We will leverage and expand this existing capacity to support continental-scale monitoring of both aquatic and terrestrial systems. The current stakeholder group is focused on programs led by bureaus within the Department of the Interior, but the tools will be readily and freely available to a broad variety of other stakeholders. Here, we report the results of two initial stakeholder workshops focused on (1) establishing a collaborative forum of large-scale monitoring programs, (2) identifying and prioritizing shared needs, (3) evaluating existing enterprise resources, (4) defining priorities for development of enhanced capacity for MonitoringResources.org, and (5) identifying a small number of pilot projects that can be used to define and test development requirements for specific monitoring programs.

  2. Applying CBR to machine tool product configuration design oriented to customer requirements

    NASA Astrophysics Data System (ADS)

    Wang, Pengjia; Gong, Yadong; Xie, Hualong; Liu, Yongxian; Nee, Andrew Yehching

    2017-01-01

    Product customization is a trend in the current market-oriented manufacturing environment. However, deduction from customer requirements to design results and evaluation of design alternatives are still heavily reliant on the designer's experience and knowledge. To solve the problem of fuzziness and uncertainty of customer requirements in product configuration, an analysis method based on the grey rough model is presented. The customer requirements can be converted into technical characteristics effectively. In addition, an optimization decision model for product planning is established to help enterprises select the key technical characteristics under the constraints of cost and time and maximize customer satisfaction. A new case retrieval approach that combines the self-organizing map and the fuzzy similarity priority ratio method is proposed in case-based design. The self-organizing map can reduce the retrieval range and increase the retrieval efficiency, and the fuzzy similarity priority ratio method can evaluate the similarity of cases comprehensively. To ensure that the final case has the best overall performance, an evaluation method based on grey correlation analysis is proposed to compare similar cases and select the most suitable one. Furthermore, a computer-aided system is developed using the MATLAB GUI to assist the product configuration design. An actual example on an ETC series machine tool product shows that the proposed method is effective, rapid and accurate in the process of product configuration. The proposed methodology provides detailed guidance for product configuration design oriented to customer requirements.
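
    To make the retrieval step concrete, the sketch below is a deliberately simplified stand-in for the paper's two-stage retrieval (self-organizing map pre-clustering followed by the fuzzy similarity priority ratio method): past cases are simply ranked by a weighted similarity score. The feature values and weights are invented for illustration.

        import numpy as np

        def retrieve_similar_cases(requirement_vec, case_library, weights, top_k=3):
            """Simplified case retrieval: rank stored configurations by a weighted
            similarity to the new order's normalised technical characteristics.
            requirement_vec : normalised characteristics of the new customer order
            case_library    : array (n_cases, n_features) of past configurations
            weights         : relative importance of each technical characteristic"""
            req = np.asarray(requirement_vec, dtype=float)
            lib = np.asarray(case_library, dtype=float)
            w = np.asarray(weights, dtype=float) / np.sum(weights)
            # similarity in [0, 1]: 1 - weighted mean absolute difference
            sim = 1.0 - np.abs(lib - req) @ w
            order = np.argsort(sim)[::-1][:top_k]
            return [(int(i), float(sim[i])) for i in order]

        # usage sketch with made-up, normalised characteristics
        cases = np.array([[0.8, 0.3, 0.5], [0.6, 0.7, 0.4], [0.9, 0.2, 0.9]])
        print(retrieve_similar_cases([0.7, 0.3, 0.6], cases, weights=[0.5, 0.2, 0.3]))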

  3. ICAROUS - Integrated Configurable Algorithms for Reliable Operations Of Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Consiglio, María; Muñoz, César; Hagen, George; Narkawicz, Anthony; Balachandran, Swee

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This paper describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and contingency control functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.
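
    ICAROUS itself relies on formally verified algorithms; as a purely illustrative, unverified analogue of the constraint conformance monitoring idea, the sketch below checks a vehicle track against a keep-in geofence and reports violations so that a contingency control function could take over.

        def inside_polygon(point, polygon):
            """Ray-casting point-in-polygon test (2-D, coordinates treated as planar)."""
            x, y = point
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        def conformance_monitor(track, keep_in_fence):
            """Yield (time_index, position) whenever the vehicle leaves the keep-in
            geofence, i.e. whenever a contingency controller would need to act."""
            for k, pos in enumerate(track):
                if not inside_polygon(pos, keep_in_fence):
                    yield k, pos

        # usage sketch with a square keep-in fence and a straying track
        fence = [(0, 0), (10, 0), (10, 10), (0, 10)]
        track = [(1, 1), (5, 5), (9, 9), (11, 9)]
        print(list(conformance_monitor(track, fence)))   # -> [(3, (11, 9))]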

  4. ICAROUS: Integrated Configurable Architecture for Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This video describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the auspices of the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and autonomous detect and avoid functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.

  5. Reconstruction of phase maps from the configuration of phase singularities in two-dimensional manifolds.

    PubMed

    Herlin, Antoine; Jacquemet, Vincent

    2012-05-01

    Phase singularity analysis provides a quantitative description of spiral wave patterns observed in chemical or biological excitable media. The configuration of phase singularities (locations and directions of rotation) is easily derived from phase maps in two-dimensional manifolds. The question arises whether one can construct a phase map with a given configuration of phase singularities. The existence of such a phase map is guaranteed provided that the phase singularity configuration satisfies a certain constraint associated with the topology of the supporting medium. This paper presents a constructive mathematical approach to numerically solve this problem in the plane and on the sphere as well as in more general geometries relevant to atrial anatomy including holes and a septal wall. This tool can notably be used to create initial conditions with a controllable spiral wave configuration for cardiac propagation models and thus help in the design of computer experiments in atrial electrophysiology.
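
    For the planar case, the constructive idea can be sketched by summing, for each prescribed singularity, its winding angle weighted by its chirality and wrapping the result; the spherical and anatomically realistic geometries discussed in the paper require the more general construction given there.

        import numpy as np

        def phase_map_from_singularities(xs, ys, singularities):
            """Construct a planar phase map whose phase singularities sit at the
            requested locations with the requested chiralities (+1 or -1).
            Each singularity contributes its winding angle; the sum is wrapped
            to (-pi, pi].  Planar case only, for illustration."""
            X, Y = np.meshgrid(xs, ys)
            phase = np.zeros_like(X)
            for (x0, y0, chirality) in singularities:
                phase += chirality * np.arctan2(Y - y0, X - x0)
            return np.angle(np.exp(1j * phase))          # wrap to (-pi, pi]

        # usage sketch: one counter-clockwise and one clockwise singularity
        xs = ys = np.linspace(-1.0, 1.0, 101)
        phi = phase_map_from_singularities(xs, ys, [(-0.3, 0.0, +1), (0.3, 0.0, -1)])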

  6. Architectures, Models, Algorithms, and Software Tools for Configurable Computing

    DTIC Science & Technology

    2000-03-06

    Cited references include J. G. Nash, "The gated interconnection network for dynamic programming" (Plenum, 1988); Ju-wook Jang, Heonchul Park, and Viktor K. Prasanna (Sep. 1997); and C. Ebeling, D. C. Cronquist, P. Franklin and C. Fisher, "RaPiD - A configurable computing architecture for compute-intensive..." The Models, Algorithms, and Architectures for Reconfigurable Computing (MAARC) project developed a sound framework for...

  7. RotCFD Analysis of the AH-56 Cheyenne Hub Drag

    NASA Technical Reports Server (NTRS)

    Solis, Eduardo; Bass, Tal A.; Keith, Matthew D.; Oppenheim, Rebecca T.; Runyon, Bryan T.; Veras-Alba, Belen

    2016-01-01

    In 2016, the U.S. Army Aviation Development Directorate (ADD) conducted tests in the U.S. Army 7- by 10- Foot Wind Tunnel at NASA Ames Research Center of a nonrotating 2/5th-scale AH-56 rotor hub. The objective of the tests was to determine how removing the mechanical control gyro affected the drag. Data for the lift, drag, and pitching moment were recorded for the 4-bladed rotor hub in various hardware configurations, azimuth angles, and angles of attack. Numerical simulations of a selection of the configurations and orientations were then performed, and the results were compared with the test data. To generate the simulation results, the hardware configurations were modeled using Creo and Rhinoceros 5, three-dimensional surface modeling computer-aided design (CAD) programs. The CAD model was imported into Rotorcraft Computational Fluid Dynamics (RotCFD), a computational fluid dynamics (CFD) tool used for analyzing rotor flow fields. RotCFD simulation results were compared with the experimental results of three hardware configurations at two azimuth angles, two angles of attack, and with and without wind tunnel walls. The results help validate RotCFD as a tool for analyzing low-drag rotor hub designs for advanced high-speed rotorcraft concepts. Future work will involve simulating additional hub geometries to reduce drag or tailor to other desired performance levels.

  8. The control system of the 12-m medium-size telescope prototype: a test-ground for the CTA array control

    NASA Astrophysics Data System (ADS)

    Oya, I.; Anguner, E. A.; Behera, B.; Birsin, E.; Fuessling, M.; Lindemann, R.; Melkumyan, D.; Schlenstedt, S.; Schmidt, T.; Schwanke, U.; Sternberger, R.; Wegner, P.; Wiesand, S.

    2014-07-01

    The Cherenkov Telescope Array (CTA) will be the next generation ground-based very-high-energy gamma-ray observatory. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and the other one in the Southern hemisphere composed of about 100 telescopes, both arrays containing telescopes of different sizes and types and in addition numerous auxiliary devices. In order to provide a test-ground for the CTA array control, the steering software of the 12-m medium size telescope (MST) prototype deployed in Berlin has been implemented using the tools and design concepts under consideration to be used for the control of the CTA array. The prototype control system is implemented based on the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) control middleware, with components implemented in Java, C++ and Python. The interfacing to the hardware is standardized via the Object Linking and Embedding for Process Control Unified Architecture (OPC UA). In order to access the OPC UA servers from the ACS framework in a common way, a library has been developed that allows the OPC UA server nodes, methods and events to be tied to their equivalents in ACS components. The front-end of the archive system is able to identify the deployed components and to perform the sampling of the monitoring points of each component following time and value change triggers according to the selected configurations. The back-end of the archive system of the prototype is composed of two different databases: MySQL and MongoDB. MySQL has been selected for storage of the system configurations, while MongoDB is used for efficient storage of device monitoring data, CCD images, logging and alarm information. In this contribution, the details and conclusions on the implementation of the control software of the MST prototype are presented.
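
    As a hedged sketch of what sampling a monitoring point on time and value-change triggers can look like (this is not the ACS/OPC UA bridge described above; the server URL, node identifier, deadband and the choice of the python-opcua and pymongo packages are illustrative assumptions):

        # Minimal monitoring-point sampler: poll an OPC UA node and archive value
        # changes in MongoDB.  Server URL, node id, and deadband are placeholders.
        import time
        from opcua import Client            # python-opcua package (assumed choice)
        from pymongo import MongoClient

        opc = Client("opc.tcp://mst-prototype.example:4840")   # hypothetical server
        opc.connect()
        node = opc.get_node("ns=2;s=Drive.Azimuth.Position")    # hypothetical node id
        archive = MongoClient("mongodb://localhost:27017")["monitoring"]["samples"]

        last_value, deadband = None, 0.01
        try:
            while True:
                value = node.get_value()
                # value-change trigger: store only if the change exceeds the deadband
                if last_value is None or abs(value - last_value) > deadband:
                    archive.insert_one({"node": "Drive.Azimuth.Position",
                                        "value": value, "t": time.time()})
                    last_value = value
                time.sleep(1.0)             # time trigger / sampling period
        finally:
            opc.disconnect()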

  9. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    System Visualization Tool (SVT) computer program developed to provide systems engineers with means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.

  10. Evaluation of Automated Configuration Management Tools in Ada Programming Support Environments.

    DTIC Science & Technology

    1984-03-01

    AD-A140 982. Evaluation of Automated Configuration Management Tools in Ada Programming Support Environments. Air Force Institute of Technology, Wright-Patterson AFB, OH, School of Engineering. M. S. Orndorff. Unclassified, March 1984. AFIT/GCS/EE/84M-1.

  11. Study of the joining of polycarbonate panels in butt joint configuration through friction stir welding

    NASA Astrophysics Data System (ADS)

    Astarita, Antonello; Boccarusso, Luca; Carrino, Luigi; Durante, Massimo; Minutolo, Fabrizio Memola Capece; Squillace, Antonino

    2018-05-01

    Polycarbonate sheets, 3 mm thick, were successfully friction stir welded in butt joint configuration. To study the feasibility of the process and the influence of the process parameters, joints were produced under different processing conditions obtained by varying the tool rotational speed and the tool travel speed. Tensile tests were carried out to characterize the joints. Moreover, the forces arising during the process were recorded and carefully studied. The experimental outcomes proved the feasibility of the process when the process parameters are properly set: joints retaining more than 70% of the UTS of the base material were produced. The trend of the forces was described and explained, and the influence of the process parameters was also discussed.

  12. Developing enterprise tools and capacities for large-scale natural resource monitoring: A visioning workshop

    USGS Publications Warehouse

    Bayer, Jennifer M.; Weltzin, Jake F.; Scully, Rebecca A.

    2017-01-01

    Objectives of the workshop were: 1) identify resources that support natural resource monitoring programs working across the data life cycle; 2) prioritize desired capacities and tools to facilitate monitoring design and implementation; 3) identify standards and best practices that improve discovery, accessibility, and interoperability of data across programs and jurisdictions; and 4) contribute to an emerging community of practice focused on natural resource monitoring.

  13. Design tool for inventory and monitoring

    Treesearch

    Charles T. Scott; Renate Bush

    2009-01-01

    Forest survey planning typically begins by determining the area to be sampled and the attributes to be measured. All too often the data are collected but underutilized because they did not address the critical management questions. The Design Tool for Inventory and Monitoring (DTIM) is being developed by the National Inventory and Monitoring Applications Center in...

  14. Biological Effects–Based Tools for Monitoring Impacted Surface Waters in the Great Lakes: A Multiagency Program in Support of the Great Lakes Restoration Initiative

    EPA Science Inventory

    There is increasing demand for the implementation of effects-based monitoring and surveillance (EBMS) approaches in the Great Lakes Basin to complement traditional chemical monitoring. Herein, we describe an ongoing multiagency effort to develop and implement EBMS tools, particul...

  15. Web-Based Mathematics Progress Monitoring in Second Grade

    ERIC Educational Resources Information Center

    Salaschek, Martin; Souvignier, Elmar

    2014-01-01

    We examined a web-based mathematics progress monitoring tool for second graders. The tool monitors the learning progress of two competences, number sense and computation. A total of 414 students from 19 classrooms in Germany were checked every 3 weeks from fall to spring. Correlational analyses indicate that alternate-form reliability was adequate…

  16. On-line tool breakage monitoring of vibration tapping using spindle motor current

    NASA Astrophysics Data System (ADS)

    Li, Guangjun; Lu, Huimin; Liu, Gang

    2008-10-01

    The input current of the driving motor has been successfully employed to monitor the cutting state in manufacturing processes for more than a decade. In vibration tapping, however, on-line monitoring of the motor current has not been reported. In this paper, a tap failure prediction method is proposed to monitor the vibration tapping process using the electrical current signal of the spindle motor. The process of vibration tapping is firstly described. Then the relationship between the torque of vibration tapping and the electric current of the motor is investigated by theoretical deduction and experimental measurement. According to those results, a tool breakage monitoring method is proposed based on the ratio of the current amplitudes during adjacent vibration tapping periods. Finally, a low frequency vibration tapping system with motor current monitoring is built up using a servo motor B-106B and its driver CR06. The proposed method has been demonstrated with experimental data of vibration tapping in titanium alloys. The results show that the method is feasible for tool breakage monitoring in the process of vibration tapping small threaded holes: with an amplitude-ratio threshold of 1.2 and a requirement of at least 2 overruns among 50 adjacent periods, it can avoid tool breakage while giving few false alarms.
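
    The decision rule reported in the abstract can be sketched as follows (a reading of the stated threshold of 1.2 and the 2-overruns-in-50-periods condition; the authors' actual bookkeeping may differ):

        import numpy as np

        def breakage_alarm(period_amplitudes, ratio_thresh=1.2, min_overruns=2,
                           window=50):
            """Compare current amplitudes of adjacent vibration-tapping periods and
            raise an alarm when the ratio exceeds `ratio_thresh` at least
            `min_overruns` times within the last `window` periods."""
            amps = np.asarray(period_amplitudes, dtype=float)
            ratios = amps[1:] / amps[:-1]                 # adjacent-period ratios
            overrun = ratios > ratio_thresh
            for k in range(len(overrun)):
                recent = overrun[max(0, k - window + 1): k + 1]
                if recent.sum() >= min_overruns:
                    return k + 1                          # period index of the alarm
            return None                                   # no breakage indicated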

  17. Optimal distribution of borehole geophones for monitoring CO2-injection-induced seismicity

    NASA Astrophysics Data System (ADS)

    Huang, L.; Chen, T.; Foxall, W.; Wagoner, J. L.

    2016-12-01

    The U.S. DOE initiative, National Risk Assessment Partnership (NRAP), aims to develop quantitative risk assessment methodologies for carbon capture, utilization and storage (CCUS). As part of the tasks of the Strategic Monitoring Group of NRAP, we develop a tool for the optimal design of a borehole geophone distribution for monitoring CO2-injection-induced seismicity. The tool's workflow consists of a number of steps, including building a geophysical model for a given CO2 injection site, defining target monitoring regions within CO2-injection/migration zones, generating synthetic seismic data, specifying acceptable uncertainties in input data, and determining the optimal distribution of borehole geophones. We use a synthetic geophysical model as an example to demonstrate the capability of our new tool to design an optimal, cost-effective passive seismic monitoring network using borehole geophones. The model is built based on the geologic features found at the Kimberlina CCUS pilot site located in the southern San Joaquin Valley, California. This tool can provide CCUS operators with a guideline for cost-effective microseismic monitoring of geologic carbon storage and utilization.

  18. Approach to in-process tool wear monitoring in drilling: Application of Kalman filter theory

    NASA Astrophysics Data System (ADS)

    He, Ning; Zhang, Youzhen; Pan, Liangxian

    1993-05-01

    The two parameters often used in adaptive control, tool wear and wear rate, are important factors affecting machinability. In this paper, modern cybernetics is applied to the in-process tool wear monitoring problem by using Kalman filter theory to monitor drill wear quantitatively. Based on the experimental results, a dynamic model, a measurement model and a measurement conversion model suitable for the Kalman filter are established. It is proved that the monitoring system possesses complete observability but not complete controllability. A discriminant for selecting the characteristic parameters is put forward, and by this discriminant the thrust force Fz is selected as the characteristic parameter for monitoring tool wear. An in-process Kalman filter drill wear monitoring system composed of a force sensor, microphotography and a microcomputer is established. The results obtained by the Kalman filter, the common indirect measuring method and the real drill wear measured with the aid of microphotography are compared. The result shows that the Kalman filter has high measurement precision and satisfies the real-time requirement.
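
    The paper's dynamic, measurement and measurement-conversion models are not reproduced here; as a generic illustration of the filtering step, the sketch below runs a scalar Kalman filter on a wear-equivalent measurement derived from the thrust force, with assumed noise variances and drift.

        import numpy as np

        def kalman_wear_estimate(z, wear_rate=0.01, q=1e-5, r=0.05):
            """Generic scalar Kalman filter for tool wear, for illustration only:
            state x_k = wear, dynamics x_k = x_{k-1} + wear_rate (drift), and
            measurement z_k = x_k + noise (z would come from converting the thrust
            force Fz into an equivalent wear value).  q, r: assumed process and
            measurement noise variances."""
            x, p = 0.0, 1.0
            estimates = []
            for zk in np.asarray(z, dtype=float):
                # predict
                x_pred, p_pred = x + wear_rate, p + q
                # update
                k_gain = p_pred / (p_pred + r)
                x = x_pred + k_gain * (zk - x_pred)
                p = (1.0 - k_gain) * p_pred
                estimates.append(x)
            return np.array(estimates)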

  19. Self-Monitoring Symptoms in Glaucoma: A Feasibility Study of a Web-Based Diary Tool

    PubMed Central

    McDonald, Leanne; Glen, Fiona C.; Taylor, Deanna J.

    2017-01-01

    Purpose. Glaucoma patients annually spend only a few hours in an eye clinic but spend more than 5000 waking hours engaged in everything else. We propose that patients could self-monitor changes in visual symptoms, providing valuable between-clinic information; we test the hypothesis that this is feasible using a web-based diary tool. Methods. Ten glaucoma patients with a range of visual field loss took part in an eight-week pilot study. After completing a series of baseline tests, volunteers were prompted to monitor symptoms every three days and complete a diary about their vision during daily life using a bespoke web-based diary tool. Response to an end-of-study questionnaire about the usefulness of the exercise was the main outcome measure. Results. Eight of the 10 patients rated the monitoring scheme to be "valuable" or "very valuable." The completion rate for items was excellent (96%). Themes from a qualitative synthesis of the diary entries related to behavioural aspects of glaucoma. One patient concluded that a constant focus on monitoring symptoms led to negative feelings. Conclusions. A web-based diary tool for monitoring self-reported glaucoma symptoms is practically feasible. The tool must be carefully designed to ensure participants are benefitting and that it is not increasing anxiety. PMID:28546876

  20. A remote condition monitoring system for wind-turbine based DG systems

    NASA Astrophysics Data System (ADS)

    Ma, X.; Wang, G.; Cross, P.; Zhang, X.

    2012-05-01

    In this paper, a remote condition monitoring system is proposed, which fundamentally consists of real-time monitoring modules on the plant side, a remote support centre and the communications between them. The paper addresses some of the key issues related to the monitoring system, including i) the implementation and configuration of a VPN connection, ii) an effective database system able to handle huge amounts of monitoring data, and iii) efficient data mining techniques to convert raw data into useful information for plant assessment. The preliminary results have demonstrated that the proposed system is practically feasible and can be deployed to monitor the emerging new energy generation systems.

  1. Monitoring Object Library Usage and Changes

    NASA Technical Reports Server (NTRS)

    Owen, R. K.; Craw, James M. (Technical Monitor)

    1995-01-01

    The NASA Ames Numerical Aerodynamic Simulation program Aeronautics Consolidated Supercomputing Facility (NAS/ACSF) supercomputing center services over 1600 users, and has numerous analysts with root access. Several tools have been developed to monitor object library usage and changes. Some of the tools do "noninvasive" monitoring and other tools implement run-time logging even for object-only libraries. The run-time logging identifies who, when, and what is being used. The benefits are that real usage can be measured, unused libraries can be discontinued, training and optimization efforts can be focused at those numerical methods that are actually used. An overview of the tools will be given and the results will be discussed.

  2. 78 FR 79044 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-27

    Proposed Rule Change to Offer Risk Management Tools Designed to Allow Member Organizations to Monitor and Address Exposure to Risk. The Exchange proposes to offer risk management tools designed to allow member organizations to monitor and address exposure to risk.

  3. Developing a 300C Analog Tool for EGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Normann, Randy

    2015-03-23

    This paper covers the development of a 300°C geothermal well monitoring tool for supporting future EGS (enhanced geothermal systems) power production. This is the first of 3 tools planned. This is an analog tool designed for monitoring well pressure and temperature. There is discussion of 3 different circuit topologies and the development of the supporting surface electronics and software. There is information on testing electronic circuits and components. One of the major components is the cable used to connect the analog tool to the surface.

  4. Spectroscopic Ellipsometry Studies of Thin Film a-Si:H Solar Cell Fabrication by Multichamber Deposition in the n-i-p Substrate Configuration

    NASA Astrophysics Data System (ADS)

    Dahal, Lila Raj

    Real time spectroscopic ellipsometry (RTSE) and ex-situ mapping spectroscopic ellipsometry (SE) are powerful characterization techniques capable of performance optimization and scale-up evaluation of thin film solar cells used in various photovoltaics technologies. These non-invasive optical probes employ multichannel spectral detection for high speed and provide high precision parameters that describe (i) thin film structure, such as layer thicknesses, and (ii) thin film optical properties, such as oscillator variables in analytical expressions for the complex dielectric function. These parameters are critical for evaluating the electronic performance of materials in thin film solar cells and also can be used as inputs for simulating their multilayer optical performance. In this Thesis, the component layers of thin film hydrogenated silicon (Si:H) solar cells in the n-i-p or substrate configuration on rigid and flexible substrate materials have been studied by RTSE and ex-situ mapping SE. Depositions were performed by magnetron sputtering for the metal and transparent conducting oxide contacts and by plasma enhanced chemical vapor deposition (PECVD) for the semiconductor doped contacts and intrinsic absorber layers. The motivations are first to optimize the thin film Si:H solar cell in the n-i-p substrate configuration for single-junction small-area dot cells and ultimately to scale up the optimized process to larger areas with minimum loss in device performance. Deposition phase diagrams for both i- and p-layers on 2" x 2" rigid borosilicate glass substrates were developed as functions of the hydrogen-to-silane flow ratio in PECVD. These phase diagrams were correlated with the performance parameters of the corresponding solar cells, fabricated in the Cr/Ag/ZnO/n/i/p/ITO structure. In both cases, optimization was achieved when the layers were deposited in the protocrystalline phase. Identical solar cell structures were fabricated on 6" x 6" borosilicate glass with 256 cells followed by ex-situ mapping SE on each cell to achieve better statistics for solar cell optimization by correlating local structural parameters with solar cell parameters. Solar cells of similar structure were also fabricated on flexible polymer substrates in the roll-to-roll configuration. In this configuration as well, RTSE was demonstrated as an effective process monitoring and control tool for thin film photovoltaics.

  5. Preliminary Development of Real Time Usage-Phase Monitoring System for CNC Machine Tools with a Case Study on CNC Machine VMC 250

    NASA Astrophysics Data System (ADS)

    Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah

    2018-03-01

    The production characteristics of the job-shop industry, in which products have wide variety but small volumes, mean that every machine tool is shared to conduct production processes with dynamic loads. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for every component should be determined based on the actual usage of machine tool components. This paper describes a study on the development of a monitoring system for obtaining information about the usage of each CNC machine tool component in real time, approached by grouping components based on their operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage phase activity data taken from certain electronic components within the CNC machine. These components are the adaptor, servo driver and spindle driver, as well as additional components such as a microcontroller and relays. The obtained data are utilized for detecting machine utilization phases such as the power-on state, machine-ready state or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information of machine tool usage as well as its duration, and of displaying the information in the user interface application.
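
    As an illustrative reading of the phase-detection logic (the mapping from component power states to phases below is an assumption based on the abstract, not the exact rule set of the developed device):

        def usage_phase(adaptor_on, servo_on, spindle_on):
            """Map the power states of the monitored components to a usage phase."""
            if not adaptor_on:
                return "power off"
            if adaptor_on and not servo_on:
                return "power on"            # machine energised, axes not ready
            if servo_on and not spindle_on:
                return "machine ready"
            return "spindle running"

        def accumulate_durations(samples, dt_s=1.0):
            """samples: iterable of (adaptor_on, servo_on, spindle_on) tuples read at
            a fixed period dt_s; returns total seconds spent in each phase."""
            totals = {}
            for s in samples:
                phase = usage_phase(*s)
                totals[phase] = totals.get(phase, 0.0) + dt_s
            return totals

        # usage sketch
        log = [(1, 0, 0)] * 30 + [(1, 1, 0)] * 20 + [(1, 1, 1)] * 50
        print(accumulate_durations(log))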

  6. Neutronic analysis of the 1D and 1E banks reflux detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, A.

    1999-12-21

    Two H Canyon neutron monitoring systems for early detection of postulated abnormal reflux conditions in the Second Uranium Cycle 1E and 1D Mixer-Settler Banks have been designed and built. Monte Carlo neutron transport simulations using the general purpose, general geometry, n-particle MCNP code have been performed to model the expected response of the monitoring systems to varying conditions. The confirmatory studies documented herein conclude that the 1E and 1D neutron monitoring systems are able to achieve adequate neutron count rates for various neutron source and detector configurations, thereby eliminating excessive integration count time. Neutron count rate sensitivity studies are also performed. Conversely, the transport studies concluded that the neutron count rates are statistically insensitive to nitric acid content in the aqueous region and to the transition region length. These studies conclude that the 1E and 1D neutron monitoring systems are able to predict the postulated reflux conditions for all examined perturbations in the neutron source and detector configurations. In the cases examined, the relative change in the neutron count rates due to postulated transitions from normal ²³⁵U concentration levels to reflux levels remains satisfactorily detectable.

  7. Monitoring arrangement for vented nuclear fuel elements

    DOEpatents

    Campana, Robert J.

    1981-01-01

    In a nuclear fuel reactor core, fuel elements are arranged in a closely packed hexagonal configuration, each fuel element having diametrically opposed vents permitting 180.degree. rotation of the fuel elements to counteract bowing. A grid plate engages the fuel elements and forms passages for communicating sets of three, four or six individual vents with respective monitor lines in order to communicate vented radioactive gases from the fuel elements to suitable monitor means in a manner readily permitting detection of leakage in individual fuel elements.

  8. Method and apparatus for characterizing and enhancing the dynamic performance of machine tools

    DOEpatents

    Barkman, William E; Babelay, Jr., Edwin F

    2013-12-17

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include dynamic one axis positional accuracy of the machine tool, dynamic cross-axis stability of the machine tool, and dynamic multi-axis positional accuracy of the machine tool.

  9. Method and apparatus for characterizing and enhancing the functional performance of machine tools

    DOEpatents

    Barkman, William E; Babelay, Jr., Edwin F; Smith, Kevin Scott; Assaid, Thomas S; McFarland, Justin T; Tursky, David A; Woody, Bethany; Adams, David

    2013-04-30

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include workpiece surface finish, and the ability to generate chips of the desired length.

  10. Improving inflammatory arthritis management through tighter monitoring of patients and the use of innovative electronic tools

    PubMed Central

    van Riel, Piet; Combe, Bernard; Abdulganieva, Diana; Bousquet, Paola; Courtenay, Molly; Curiale, Cinzia; Gómez-Centeno, Antonio; Haugeberg, Glenn; Leeb, Burkhard; Puolakka, Kari; Ravelli, Angelo; Rintelen, Bernhard; Sarzi-Puttini, Piercarlo

    2016-01-01

    Treating to target by monitoring disease activity and adjusting therapy to attain remission or low disease activity has been shown to lead to improved outcomes in chronic rheumatic diseases such as rheumatoid arthritis and spondyloarthritis. Patient-reported outcomes, used in conjunction with clinical measures, add an important perspective on disease activity as perceived by the patient. Several validated PROs are available for inflammatory arthritis, and advances in electronic patient monitoring tools are helping patients with chronic diseases to self-monitor and assess their symptoms and health. Frequent patient monitoring could potentially lead to the early identification of disease flares or adverse events, early intervention for patients who may require treatment adaptation, and possibly reduced appointment frequency for those with stable disease. A literature search was conducted to evaluate the potential role of patient self-monitoring and innovative monitoring tools in optimising disease control in inflammatory arthritis. Experience from the treatment of congestive heart failure, diabetes and hypertension shows improved outcomes with remote electronic self-monitoring by patients. In inflammatory arthritis, electronic self-monitoring has been shown to be feasible in patients despite manual disability and to be acceptable to older patients. Patients' self-assessment of disease activity using such methods correlates well with disease activity assessed by rheumatologists. This review also describes several remote monitoring tools that are being developed and used in inflammatory arthritis, offering the potential to improve disease management and reduce pressure on specialists. PMID:27933206

  11. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    NASA Astrophysics Data System (ADS)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to making the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
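
    Two of the listed corrections can be sketched as follows; this is not ELPP code, and the dead time and the number of background bins are assumed values.

        import numpy as np

        def deadtime_correct(count_rate_mhz, tau_ns=3.7):
            """Non-paralyzable dead-time correction of a photon-counting profile:
            N_true = N_meas / (1 - N_meas * tau).  tau_ns is an assumed dead time."""
            n = np.asarray(count_rate_mhz, dtype=float)            # MHz
            return n / (1.0 - n * tau_ns * 1e-3)                   # MHz * ns -> 1e-3

        def subtract_background(profile, bg_bins=500):
            """Atmospheric/electronic background estimated from the far-range tail
            of the profile (last `bg_bins` range bins) and subtracted."""
            p = np.asarray(profile, dtype=float)
            return p - p[-bg_bins:].mean()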

  12. Evaluating the Potential of Commercial GIS for Accelerator Configuration Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.L. Larrieu; Y.R. Roblin; K. White

    2005-10-10

    The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an ''electron utility''. Whereas the water utility uses pipes and pumps, the ''electron utility'' uses magnets and RF cavities. At Jefferson lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most-up-to-date layout information maintained by the Survey & Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database-connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist and these challenges are not only technical and manpower issues, but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free rein to manage their own data, to cede some control and agree to a centralized framework.

  13. Adaptive sparse grid approach for the efficient simulation of pulsed eddy current testing inspections

    NASA Astrophysics Data System (ADS)

    Miorelli, Roberto; Reboud, Christophe

    2018-04-01

    Pulsed Eddy Current Testing (PECT) is a popular NonDestructive Testing (NDT) technique for some applications like corrosion monitoring in the oil and gas industry, or rivet inspection in the aeronautic area. Its particularity is to use a transient excitation, which allows to retrieve more information from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools prove, as usual, very useful to optimize experimental sensors and devices or evaluate their performance, for instance. This paper proposes an efficient simulation of PECT signals based on standard time harmonic solvers and use of an Adaptive Sparse Grid (ASG) algorithm. An adaptive sampling of the ECT signal spectrum is performed with this algorithm, then the complete spectrum is interpolated from this sparse representation and PECT signals are finally synthesized by means of inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented and the performance of the strategy is discussed by comparison to reference results.
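
    The final synthesis step can be sketched independently of the adaptive sparse grid sampling itself: interpolate the sparsely sampled (complex) frequency response onto a regular grid and apply an inverse Fourier transform. The frequencies, the model response, and the sampling rate below are invented for illustration.

        import numpy as np

        def synthesize_pect_signal(f_sparse, spectrum_sparse, n_freq=512, fs=1.0e6):
            """Interpolate a sparsely sampled complex frequency response onto a
            regular grid and synthesise a time-domain PECT signal by inverse FFT.
            The adaptive choice of the sparse frequencies is what the ASG algorithm
            provides; here they are taken as given."""
            f_grid = np.linspace(0.0, fs / 2.0, n_freq)
            re = np.interp(f_grid, f_sparse, spectrum_sparse.real)
            im = np.interp(f_grid, f_sparse, spectrum_sparse.imag)
            spectrum = re + 1j * im
            # multiply by the excitation pulse spectrum here if needed, then transform
            return np.fft.irfft(spectrum)

        # usage sketch with a made-up low-pass-like response sampled at 20 frequencies
        f_s = np.linspace(0.0, 5.0e5, 20)
        h_s = 1.0 / (1.0 + 1j * f_s / 5.0e4)
        signal_t = synthesize_pect_signal(f_s, h_s)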

  14. The MMPI Assistant: A Microcomputer Based Expert System to Assist in Interpreting MMPI Profiles

    PubMed Central

    Tanner, Barry A.

    1989-01-01

    The Assistant is an MS DOS program to aid clinical psychologists in interpreting the results of the Minnesota Multiphasic Personality Inventory (MMPI). Interpretive hypotheses are based on the professional literature and the author's experience. After scores are entered manually, the Assistant produces a hard copy which is intended for use by a psychologist knowledgeable about the MMPI. The rules for each hypothesis appear first on the monitor, and then in the printed output, followed by the patient's scores on the relevant scales, and narrative hypotheses for the scores. The data base includes hypotheses for 23 validity configurations, 45 two-point clinical codes, 10 high scoring single-point clinical scales, and 10 low scoring single-point clinical scales. The program can accelerate the production of test reports, while insuring that actuarial rules are not overlooked. It has been especially useful as a teaching tool with graduate students. The Assistant requires an IBM PC compatible with 128k available memory, DOS 2.x or higher, and a printer.

  15. Scientific and Technological Foundations for Scaling Production of Nanostructured Metals

    NASA Astrophysics Data System (ADS)

    Lowe, Terry C.; Davis, Casey F.; Rovira, Peter M.; Hayne, Mathew L.; Campbell, Gordon S.; Grzenia, Joel E.; Stock, Paige J.; Meagher, Rilee C.; Rack, Henry J.

    2017-05-01

    Severe Plastic Deformation (SPD) has been explored in a wide range of metals and alloys. However, there are only a few industrial scale implementations of SPD for commercial alloys. To demonstrate and evolve technology for producing ultrafine grain metals by SPD, a Nanostructured Metals Manufacturing Testbed (NMMT) has been established in Golden, Colorado. Machines for research scale and pilot scale Equal Channel Angular Pressing-Conform (ECAP-C) technology have been configured in the NMMT to systematically evaluate and evolve SPD processing and advance the foundational science and technology for manufacturing. We highlight the scientific and technological areas that are critical for scale up of continuous SPD of aluminum, copper, magnesium, titanium, and iron-based alloys. Key areas that we will address in this presentation include the need for comprehensive analysis of starting microstructures, data on operating deformation mechanisms, high pressure thermodynamics and phase transformation kinetics, tribological behaviors, temperature dependence of lubricant properties, adaptation of tolerances and shear intensity to match viscoplastic behaviors, real-time process monitoring, and mechanics of billet/tooling interactions.

  16. A Practical Approach to Governance and Optimization of Structured Data Elements.

    PubMed

    Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto

    2015-01-01

    Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) calculation of gap analyses of the EHR compared against the reference model, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on the gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to the reference model.

  17. The ALICE analysis train system

    NASA Astrophysics Data System (ADS)

    Zimmermann, Markus; ALICE Collaboration

    2015-05-01

    In the ALICE experiment hundreds of users are analyzing big datasets on a Grid system. High throughput and short turn-around times are achieved by a centralized system called the LEGO trains. This system combines analysis from different users in so-called analysis trains which are then executed within the same Grid jobs thereby reducing the number of times the data needs to be read from the storage systems. The centralized trains improve the performance, the usability for users and the bookkeeping in comparison to single user analysis. The train system builds upon the already existing ALICE tools, i.e. the analysis framework as well as the Grid submission and monitoring infrastructure. The entry point to the train system is a web interface which is used to configure the analysis and the desired datasets as well as to test and submit the train. Several measures have been implemented to reduce the time a train needs to finish and to increase the CPU efficiency.

  18. Healthwatch-2 System Overview

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Mosher, Marianne; Huff, Edward M.

    2004-01-01

    Healthwatch-2 (HW-2) is a research tool designed to facilitate the development and testing of in-flight health monitoring algorithms. HW-2 software is written in C/C++ and executes on an x86-based computer running the Linux operating system. The executive module has interfaces for collecting various signal data, such as vibration, torque, tachometer, and GPS. It is designed to perform in-flight time or frequency averaging based on specifications defined in a user-supplied configuration file. Averaged data are then passed to a user-supplied algorithm written as a Matlab function. This allows researchers a convenient method for testing in-flight algorithms. In addition to its in-flight capabilities, HW-2 software is also capable of reading archived flight data and processing it as if collected in-flight. This allows algorithms to be developed and tested in the laboratory before being flown. Currently HW-2 has passed its checkout phase and is collecting data on a Bell OH-58C helicopter operated by the U.S. Army at NASA Ames Research Center.

  19. The use of optical pyrometers in axial flow turbines

    NASA Astrophysics Data System (ADS)

    Sellers, R. R.; Przirembel, H. R.; Clevenger, D. H.; Lang, J. L.

    1989-07-01

    An optical pyrometer system that can be used to measure metal temperatures over an extended range of temperature has been developed. Real-time flame discrimination permits accurate operation in the gas turbine environment with high flame content. This versatile capability has been used in a number of ways. In experimental engines, a fixed angle pyrometer has been used for turbine health monitoring for the automatic test stand abort system. Turbine blade creep capability has been improved by tailoring the burner profile based on measured blade temperatures. Fixed and traversing pyrometers were used extensively during engine development to map blade surface temperatures in order to assess cooling effectiveness and identify optimum configurations. Portable units have been used in turbine field inspections. A new low temperature pyrometer is being used as a diagnostic tool in the alternate turbopump design for the Space Shuttle main engine. Advanced engine designs will incorporate pyrometers in the engine control system to limit operation to safe temperatures.

  20. Rapid Development of Custom Software Architecture Design Environments

    DTIC Science & Technology

    1999-08-01

    This dissertation describes a new approach to capturing and using architectural design expertise in software architecture design environments. A language and tools are presented for capturing and encapsulating software architecture design expertise within a conceptual framework of architectural styles and design rules. The design expertise thus captured is supported with an incrementally configurable software architecture

  1. Aggregating Concept Map Data to Investigate the Knowledge of Beginning CS Students

    ERIC Educational Resources Information Center

    Mühling, Andreas

    2016-01-01

    Concept maps have a long history in educational settings as a tool for teaching, learning, and assessing. As an assessment tool, they are predominantly used to extract the structural configuration of learners' knowledge. This article presents an investigation of the knowledge structures of a large group of beginning CS students. The investigation…

  2. Ruggedized downhole tool for real-time measurements and uses thereof

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Ryan Falcone; Lindblom, Scott C.; Yelton, William G.

    The present invention relates to ruggedized downhole tools and sensors, as well as uses thereof. In particular, these tools can operate under extreme conditions and, therefore, allow for real-time measurements in geothermal reservoirs or other potentially harsh environments. One exemplary sensor includes a ruggedized ion selective electrode (ISE) for detecting tracer concentrations in real-time. In one embodiment, the ISE includes a solid, non-conductive potting material and an ion selective material, which are disposed in a temperature-resistant electrode body. Other electrode configurations, tools, and methods are also described.

  3. Apparatus for attaching a cleaning tool to a robotic manipulator

    DOEpatents

    Killian, M.A.; Zollinger, W.T.

    1991-01-01

    This invention comprises an apparatus for connecting a cleaning tool to a robotic manipulator so that the tool can be used in contaminated areas on horizontal, vertical and sloped surfaces. The apparatus comprises a frame and a handle, with casters on the frame to facilitate movement. The handle is pivotally and releasably attached to the frame at a preselected position of a plurality of attachment positions. The apparatus is specifically configured for the KELLY VACUUM SYSTEM but can be modified for use with any standard mobile robot and cleaning tool.

  4. Apparatus for attaching a cleaning tool to a robotic manipulator

    DOEpatents

    Killian, Mark A.; Zollinger, W. Thor

    1992-01-01

    An apparatus for connecting a cleaning tool to a robotic manipulator so that the tool can be used in contaminated areas on horizontal, vertical and sloped surfaces. The apparatus comprises a frame and a handle, with casters on the frame to facilitate movement. The handle is pivotally and releasably attached to the frame at a preselected position of a plurality of attachment positions. The apparatus is specifically configured for the KELLY VACUUM SYSTEM but can be modified for use with any standard mobile robot and cleaning tool.

  5. Apparatus for attaching a cleaning tool to a robotic manipulator

    DOEpatents

    Killian, M.A.; Zollinger, W.T.

    1992-09-22

    An apparatus is described for connecting a cleaning tool to a robotic manipulator so that the tool can be used in contaminated areas on horizontal, vertical and sloped surfaces. The apparatus comprises a frame and a handle, with casters on the frame to facilitate movement. The handle is pivotally and releasably attached to the frame at a preselected position of a plurality of attachment positions. The apparatus is specifically configured for the Kelly Vacuum System but can be modified for use with any standard mobile robot and cleaning tool. 14 figs.

  6. Structural Health Monitoring Analysis for the Orbiter Wing Leading Edge

    NASA Technical Reports Server (NTRS)

    Yap, Keng C.

    2010-01-01

    This viewgraph presentation reviews Structural Health Monitoring Analysis for the Orbiter Wing Leading Edge. The Wing Leading Edge Impact Detection System (WLE IDS) and the Impact Analysis Process are also described to monitor WLE debris threats. The contents include: 1) Risk Management via SHM; 2) Hardware Overview; 3) Instrumentation; 4) Sensor Configuration; 5) Debris Hazard Monitoring; 6) Ascent Response Summary; 7) Response Signal; 8) Distribution of Flight Indications; 9) Probabilistic Risk Analysis (PRA); 10) Model Correlation; 11) Impact Tests; 12) Wing Leading Edge Modeling; 13) Ascent Debris PRA Results; and 14) MM/OD PRA Results.

  7. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite Middleware. In this framework, a monitoring system is designed for the H1 Experiment to identify and recognize within the GRID the most suitable resources for execution of CPU-time consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computer Elements (CEs), Storage Elements (SEs), WMS-servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues using submission through various WMSs and directly to the CREAM-CEs as well. Real H1 MC Production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows for identification of problems in the GRID sites and promptly reacts to them (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl with insertion of a few shell scripts. In addition to the test monitoring system we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions, the percentage of failed and finished jobs relative to all jobs on the CEs and determine the average values of waiting and running time for the involved GRID queues. CEs which do not meet the set criteria can be removed from the production chain by including them in an exception table. All of these monitoring actions lead to a more reliable and faster execution of MC requests.
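
    As a hedged sketch of how test-job statistics might be turned into an automatically generated list of usable CE queues (this is not the H1 system's code; the record fields and thresholds are assumptions):

        import json

        def select_good_ces(test_results, max_fail_frac=0.2, max_wait_s=3600):
            """test_results: {ce_queue: [{'status': 'DONE'|'FAILED',
                                          'wait_s': float, 'run_s': float}, ...]}
            Return the queues whose failure fraction and average waiting time stay
            below the (assumed) thresholds."""
            good = []
            for ce, jobs in test_results.items():
                if not jobs:
                    continue
                fail_frac = sum(j["status"] != "DONE" for j in jobs) / len(jobs)
                avg_wait = sum(j["wait_s"] for j in jobs) / len(jobs)
                if fail_frac <= max_fail_frac and avg_wait <= max_wait_s:
                    good.append(ce)
            return sorted(good)

        def write_production_config(path, ces):
            """Write the automatically generated list of usable CE queues."""
            with open(path, "w") as fh:
                json.dump({"ce_queues": ces}, fh, indent=2)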

  8. Optimized, Budget-constrained Monitoring Well Placement Using DREAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yonkofski, Catherine M. R.; Davidson, Casie L.; Rodriguez, Luke R.

    Defining the ideal suite of monitoring technologies to be deployed at a carbon capture and storage (CCS) site presents a challenge to project developers, financers, insurers, regulators and other stakeholders. The monitoring, verification, and accounting (MVA) toolkit offers a suite of technologies to monitor an extensive range of parameters across a wide span of spatial and temporal resolutions, each with their own degree of sensitivity to changes in the parameter being monitored. Understanding how best to optimize MVA budgets to minimize the time to leak detection could help to address issues around project risks, and in turn help support broad CCS deployment. This paper presents a case study demonstrating an application of the Designs for Risk Evaluation and Management (DREAM) tool using an ensemble of CO2 leakage scenarios taken from a previous study on leakage impacts to groundwater. Impacts were assessed and monitored as a function of pH, total dissolved solids (TDS), and trace metal concentrations of arsenic (As), cadmium (Cd), chromium (Cr), and lead (Pb). Using output from the previous study, DREAM was used to optimize monitoring system designs based on variable sampling locations and parameters. The algorithm requires the user to define a finite budget to limit the number of monitoring wells and technologies deployed, and then iterates well placement and sensor type and location until it converges on the configuration with the lowest time to first detection of the leak averaged across all scenarios. To facilitate an understanding of the optimal number of sampling wells, DREAM was used to assess the marginal utility of additional sampling locations. Based on assumptions about monitoring costs and replacement costs of degraded water, the incremental cost of each additional sampling well can be compared against its marginal value in terms of avoided aquifer degradation. Applying this method, DREAM identified the most cost-effective ensemble with 14 monitoring locations. While this preliminary study applied relatively simplistic cost and technology assumptions, it provides an exciting proof-of-concept for the application of DREAM to questions of cost-optimized MVA system design that are informed not only by site-specific costs and technology options, but also by reservoir simulation results developed during site characterization and operation.
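
    As an illustration of the kind of budget-constrained search described above, the Python sketch below greedily adds monitoring wells until a fixed budget is reached, at each step choosing the candidate that most reduces the average time to first detection across the leakage scenarios. The data layout and the greedy rule are assumptions for illustration; DREAM's own iterative optimization is not reproduced here.

        # Hedged greedy stand-in for the optimization DREAM performs iteratively:
        # pick up to `budget` candidate wells so that the average time to first
        # leak detection across all scenarios is minimized.
        import math

        def first_detection(chosen, detection_times, scenario):
            """Earliest detection time over the chosen wells for one scenario."""
            times = [detection_times[scenario].get(w, math.inf) for w in chosen]
            return min(times, default=math.inf)

        def greedy_placement(candidates, scenarios, detection_times, budget):
            """detection_times[scenario][well] -> time at which that well detects
            the leak (e.g. a pH/TDS/trace-metal threshold crossing); a missing key
            means the well never detects the leak in that scenario."""
            chosen = []
            for _ in range(budget):
                remaining = [w for w in candidates if w not in chosen]
                if not remaining:
                    break
                def avg_ttd(extra):
                    return sum(first_detection(chosen + [extra], detection_times, s)
                               for s in scenarios) / len(scenarios)
                chosen.append(min(remaining, key=avg_ttd))
            return chosen

    Comparing the average time to detection with k and k+1 wells in such a loop also yields the marginal utility of each additional sampling location, which is the quantity the study weighs against incremental monitoring cost and avoided aquifer degradation.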

  9. Optimized, Budget-constrained Monitoring Well Placement Using DREAM

    DOE PAGES

    Yonkofski, Catherine M. R.; Davidson, Casie L.; Rodriguez, Luke R.; ...

    2017-08-18

    Defining the ideal suite of monitoring technologies to be deployed at a carbon capture and storage (CCS) site presents a challenge to project developers, financers, insurers, regulators and other stakeholders. The monitoring, verification, and accounting (MVA) toolkit offers a suite of technologies to monitor an extensive range of parameters across a wide span of spatial and temporal resolutions, each with their own degree of sensitivity to changes in the parameter being monitored. Understanding how best to optimize MVA budgets to minimize the time to leak detection could help to address issues around project risks, and in turn help support broad CCS deployment. This paper presents a case study demonstrating an application of the Designs for Risk Evaluation and Management (DREAM) tool using an ensemble of CO2 leakage scenarios taken from a previous study on leakage impacts to groundwater. Impacts were assessed and monitored as a function of pH, total dissolved solids (TDS), and trace metal concentrations of arsenic (As), cadmium (Cd), chromium (Cr), and lead (Pb). Using output from the previous study, DREAM was used to optimize monitoring system designs based on variable sampling locations and parameters. The algorithm requires the user to define a finite budget to limit the number of monitoring wells and technologies deployed, and then iterates well placement and sensor type and location until it converges on the configuration with the lowest time to first detection of the leak averaged across all scenarios. To facilitate an understanding of the optimal number of sampling wells, DREAM was used to assess the marginal utility of additional sampling locations. Based on assumptions about monitoring costs and replacement costs of degraded water, the incremental cost of each additional sampling well can be compared against its marginal value in terms of avoided aquifer degradation. Applying this method, DREAM identified the most cost-effective ensemble with 14 monitoring locations. While this preliminary study applied relatively simplistic cost and technology assumptions, it provides an exciting proof-of-concept for the application of DREAM to questions of cost-optimized MVA system design that are informed not only by site-specific costs and technology options, but also by reservoir simulation results developed during site characterization and operation.

  10. Teacher Progress Monitoring of Instructional and Behavioral Management Practices: An Evidence-Based Approach to Improving Classroom Practices

    ERIC Educational Resources Information Center

    Reddy, Linda A.; Dudek, Christopher M.

    2014-01-01

    In the era of teacher evaluation and effectiveness, assessment tools that identify and monitor educators' instruction and behavioral management practices are in high demand. The Classroom Strategies Scale (CSS) Observer Form is a multidimensional teacher progress monitoring tool designed to assess teachers' usage of instructional and behavioral…

  11. Magnetic Resonance Imaging of the Codman Microsensor Transducer Used for Intraspinal Pressure Monitoring: Findings From the Injured Spinal Cord Pressure Evaluation Study.

    PubMed

    Phang, Isaac; Mada, Marius; Kolias, Angelos G; Newcombe, Virginia F J; Trivedi, Rikin A; Carpenter, Adrian; Hawkes, Rob C; Papadopoulos, Marios C

    2016-05-01

    Laboratory and human study. To test the Codman Microsensor Transducer (CMT) in a cervical gel phantom. To test the CMT inserted to monitor intraspinal pressure in a patient with spinal cord injury. We recently introduced the technique of intraspinal pressure monitoring using the CMT to guide management of traumatic spinal cord injury [Werndle et al. Crit Care Med 2014;42:646]. This is analogous to intracranial pressure monitoring to guide management of patients with traumatic brain injury. It is unclear whether magnetic resonance imaging (MRI) of patients with spinal cord injury is safe with the intraspinal pressure CMT in situ. We measured the heating produced by the CMT placed in a gel phantom in various configurations. A 3-T MRI system was used with the body transmit coil and the spine array receive coil. A CMT was then inserted subdurally at the injury site in a patient who had traumatic spinal cord injury, and MRI was performed at 1.5 T. In the gel phantom, heating of up to 5°C occurred with the transducer wire placed straight through the magnet bore. The heating was abolished when the CMT wire was coiled and passed away from the bore. We then tested the CMT in a patient with an American Spinal Injury Association grade C cervical cord injury. The CMT wire was placed in the configuration that abolished heating in the gel phantom. Good-quality T1 and T2 images of the cord were obtained without neurological deterioration. The transducer remained functional after the MRI. Our data suggest that the CMT is MR conditional when used in the spinal configuration in humans. Data from a large patient group are required to confirm these findings. N/A.

  12. Aided generation of search interfaces to astronomical archives

    NASA Astrophysics Data System (ADS)

    Zorba, Sonia; Bignamini, Andrea; Cepparo, Francesco; Knapic, Cristina; Molinaro, Marco; Smareglia, Riccardo

    2016-07-01

    Astrophysical data provider organizations that host web-based interfaces for access to their data resources have to cope with changes in data management that imply partial rewrites of web applications. To avoid doing this manually, it was decided to develop a dynamically configurable Java EE web application that sets itself up by reading the needed information from configuration files. The specification of what information the astronomical archive database exposes is managed using the TAP_SCHEMA schema from the IVOA TAP recommendation, which can be edited through a graphical interface. Once the configuration steps are complete, the tool builds a WAR file for easy deployment of the application.
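
    As a rough illustration of this configuration-driven approach (not the Java EE tool itself), the Python sketch below reads which columns an archive exposes from an IVOA TAP_SCHEMA-style table and turns them into simple search-form field descriptors. The flattened table name, the connection, and the descriptor layout are assumptions for illustration.

        # Minimal sketch: derive search-form fields from TAP_SCHEMA metadata.
        # In a real TAP service the table is TAP_SCHEMA.columns; it is flattened
        # to "tap_schema_columns" here only because SQLite has no schemas.
        import sqlite3  # any DB-API driver would do

        def search_fields(conn, table_name):
            """Return one form-field descriptor per exposed column of table_name."""
            rows = conn.execute(
                "SELECT column_name, datatype, description, unit "
                "FROM tap_schema_columns WHERE table_name = ?",
                (table_name,))
            return [{"name": name, "type": dtype, "label": desc or name, "unit": unit}
                    for name, dtype, desc, unit in rows]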

  13. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Weatherbee, J. E.; Taylor, D. S.

    1972-01-01

    A deterministic digital simulation model is described that models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Use of the model as a tool for configuring a minimum computer system for a typical mission is demonstrated. The configuration developed through studies with the simulator is optimal with respect to efficient use of computer system resources, i.e., it is a minimal configuration. Other considerations, such as increased reliability through the use of standby spares, would be taken into account in defining a practical system for a given mission.

  14. Finite element modeling as a tool for predicting the fracture behavior of robocast scaffolds.

    PubMed

    Miranda, Pedro; Pajares, Antonia; Guiberteau, Fernando

    2008-11-01

    The use of finite element modeling to calculate the stress fields in complex scaffold structures and thus predict their mechanical behavior during service (e.g., as load-bearing bone implants) is evaluated. The method is applied to identifying the fracture modes and estimating the strength of robocast hydroxyapatite and beta-tricalcium phosphate scaffolds, consisting of a three-dimensional lattice of interpenetrating rods. The calculations are performed for three testing configurations: compression, tension and shear. Different testing orientations relative to the calcium phosphate rods are considered for each configuration. The predictions for the compressive configurations are compared to experimental data from uniaxial compression tests.

  15. FFI: What it is and what it can do for you

    Treesearch

    Duncan C. Lutes; MaryBeth Keifer; Nathan C. Benson; John F. Caratti

    2009-01-01

    A new monitoring tool called FFI (FEAT/FIREMON Integrated) has been developed to assist managers with collection, storage and analysis of ecological information. The tool was developed through the complementary integration of two fire effects monitoring systems commonly used in the United States: FIREMON and the Fire Ecology Assessment Tool (FEAT). FFI provides...

  16. PCD tool wear and its monitoring in machining tungsten

    NASA Astrophysics Data System (ADS)

    Wang, Lijiang; Zhang, Zhenlie; Sun, Qi; Liu, Pin

    The views of Chinese and foreign researchers differ considerably as to whether polycrystalline diamond (PCD) tools can machine tungsten, which is used in the aerospace and electronics industries. A study is presented that demonstrates the feasibility of machining tungsten, and a new method is developed for monitoring tool wear in production.

  17. Incorporating Equipment Condition Assessment in Risk Monitors for Advanced Small Modular Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coble, Jamie B.; Coles, Garill A.; Meyer, Ryan M.

    2013-10-01

    Advanced small modular reactors (aSMRs) can complement the current fleet of large light-water reactors in the USA for baseload and peak demand power production and process heat applications (e.g., water desalination, shale oil extraction, hydrogen production). The day-to-day costs of aSMRs are expected to be dominated by operations and maintenance (O&M); however, the effect of diverse operating missions and unit modularity on O&M is not fully understood. These costs could potentially be reduced by optimized scheduling, with risk-informed scheduling of maintenance, repair, and replacement of equipment. Currently, most nuclear power plants have a “living” probabilistic risk assessment (PRA), which reflects the as-operated, as-modified plant and combines event probabilities with population-based probability of failure (POF) for key components. “Risk monitors” extend the PRA by incorporating the actual and dynamic plant configuration (equipment availability, operating regime, environmental conditions, etc.) into risk assessment. In fact, PRAs are more integrated into plant management in today’s nuclear power plants than at any other time in the history of nuclear power. However, population-based POF curves are still used to populate fault trees; this approach neglects the time-varying condition of equipment that is relied on during standard and non-standard configurations. Equipment condition monitoring techniques can be used to estimate the component POF. Incorporating this unit-specific estimate of POF in the risk monitor can provide a more accurate estimate of risk in different operating and maintenance configurations. This enhanced risk assessment will be especially important for aSMRs, which have advanced component designs without an available operating history to draw from and often use passive design features that present challenges to PRA. This paper presents the requirements and technical gaps for developing a framework to integrate unit-specific estimates of POF into risk monitors, resulting in enhanced risk monitors that support optimized operation and maintenance of aSMRs.
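
    To make the proposed substitution concrete, the Python sketch below evaluates a top-event probability from minimal cut sets twice: once with population-based probabilities of failure and once with unit-specific, condition-informed estimates. The cut sets, POF values, and the rare-event approximation are illustrative assumptions, not the framework described in the paper.

        # Hedged illustration: a fault-tree top-event probability where basic-event
        # POFs come from condition monitoring instead of population-based curves.
        def top_event_probability(cut_sets, pof):
            """Rare-event approximation: P(top) ~= sum over minimal cut sets of the
            product of basic-event POFs. cut_sets is a list of lists of component IDs."""
            total = 0.0
            for cs in cut_sets:
                p = 1.0
                for component in cs:
                    p *= pof[component]
                total += p
            return min(total, 1.0)

        # Hypothetical components and values, for illustration only.
        population_pof = {"pump_A": 1e-3, "valve_B": 5e-4, "pump_C": 1e-3}
        condition_pof  = {"pump_A": 4e-3, "valve_B": 5e-4, "pump_C": 2e-4}  # e.g. from vibration trending

        cut_sets = [["pump_A", "pump_C"], ["valve_B"]]
        print(top_event_probability(cut_sets, population_pof))  # population-based risk estimate
        print(top_event_probability(cut_sets, condition_pof))   # condition-informed risk estimate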

  18. 6th Annual CMMI Technology Conference and User Group

    DTIC Science & Technology

    2006-11-17

    [Fragmentary slide text.] Recoverable topics include: a Decision Table (DT) as a tabular representation with tailoring options; software engineering leading the process charge in the 1980s using flowcharts and CASE tools; EPG configuration audits and EPG configuration status reports as verification steps; and flowcharts with the Entry, Task, Verification, eXit (ETVX) format.

  19. Experimental Stage Separation Tool Development in NASA Langley's Aerothermodynamics Laboratory

    NASA Technical Reports Server (NTRS)

    Murphy, Kelly J.; Scallion, William I.

    2005-01-01

    As part of the research effort at NASA in support of the stage separation and ascent aerothermodynamics research program, proximity testing of a generic bimese wing-body configuration was conducted in NASA Langley's Aerothermodynamics Laboratory in the 20-Inch Mach 6 Air Tunnel. The objective of this work is the development of experimental tools and testing methodologies to apply to hypersonic stage separation problems for future multi-stage launch vehicle systems. Aerodynamic force and moment proximity data were generated at a nominal Mach number of 6 over a small range of angles of attack. The generic bimese configuration was tested in a belly-to-belly and back-to-belly orientation at 86 relative proximity locations. Over 800 aerodynamic proximity data points were taken to serve as a database for code validation. Longitudinal aerodynamic data generated in this test program show very good agreement with viscous computational predictions. Thus a framework has been established to study separation problems in the hypersonic regime using coordinated experimental and computational tools.

  20. Providing Common Access Mechanisms for Dissimilar Network Interconnection Nodes

    DTIC Science & Technology

    1991-02-01

    Network management involves both maintaining adequate data transmission capabilities in the face of growing and changing needs and keeping the network...Display Only tools are able to obtain information from an IN or a set of INs and display this information, but are not able to change the...configuration or state of an IN. 2. Display and Control tools have the same capabilities as Display Only tools, but in addition are capable of changing the
