Runtime Performance Monitoring Tool for RTEMS System Software
NASA Astrophysics Data System (ADS)
Cho, B.; Kim, S.; Park, H.; Kim, H.; Choi, J.; Chae, D.; Lee, J.
2007-08-01
RTEMS is a commercial-grade real-time operating system that supports multi-processor computers. However, few development tools exist for RTEMS. In this paper, we report a new RTEMS-based runtime performance monitoring tool. We have implemented a lightweight runtime monitoring task with an extension to the RTEMS APIs. Using our tool, software developers can verify various performance-related parameters at runtime. Our tool can be used during the software development phase as well as during in-orbit operation. The implemented target agent is lightweight and incurs small overhead using the SpaceWire interface. Efforts to reduce overhead further and to add other monitoring parameters are ongoing.
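As a purely hypothetical sketch of what such a monitoring task might report, per-task CPU utilization could be derived from cumulative tick counters exposed by an API extension (all names and numbers here are invented):

```python
# Hypothetical sketch: given two snapshots of per-task tick counters (an
# RTEMS API extension would supply these), derive each task's share of
# CPU time over the interval between snapshots.
def cpu_utilization(prev_ticks, curr_ticks):
    """Fraction of the interval each task ran, from cumulative counters."""
    deltas = {t: curr_ticks[t] - prev_ticks[t] for t in curr_ticks}
    total = sum(deltas.values())
    return {t: deltas[t] / total for t in deltas}

prev = {"telemetry": 100, "control": 300, "idle": 600}
curr = {"telemetry": 150, "control": 450, "idle": 900}
print(cpu_utilization(prev, curr))  # telemetry 10%, control 30%, idle 60%
```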
Use of Continuous Integration Tools for Application Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B
High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.
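A hedged sketch of the kind of regression check such a CI monitoring job could run against stored timing history (the function, tolerance, and timings are invented for illustration, not the paper's implementation):

```python
# Flag a performance regression when the latest runtime exceeds the
# mean of a trailing baseline by a relative tolerance -- the sort of
# test a Jenkins job could apply after each application run.
def is_regression(history, latest, tolerance=0.10):
    """history: past runtimes in seconds; latest: the new measurement."""
    baseline = sum(history) / len(history)
    return latest > baseline * (1.0 + tolerance)

runs = [102.0, 98.5, 100.3, 99.8]    # illustrative past timings
print(is_regression(runs, 118.2))    # well above baseline -> True
print(is_regression(runs, 101.0))    # within 10% of baseline -> False
```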
Perfmon2: a leap forward in performance monitoring
NASA Astrophysics Data System (ADS)
Jarp, S.; Jurga, R.; Nowak, A.
2008-07-01
This paper describes the software component, perfmon2, that is about to be added to the Linux kernel as the standard interface to the Performance Monitoring Unit (PMU) on common processors, including x86 (AMD and Intel), Sun SPARC, MIPS, IBM Power and Intel Itanium. It also describes a set of tools for doing performance monitoring in practice and details how the CERN openlab team has participated in the testing and development of these tools.
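As a minimal illustration of what a PMU-based tool ultimately computes (this is not perfmon2's actual API; the counter names are invented), the arithmetic on raw counter snapshots looks like this:

```python
# Derive instructions-per-cycle (IPC) from two PMU counter readings
# taken before and after a measured region -- the basic reduction a
# perfmon2-style tool performs on raw hardware counters.
def ipc(start, stop):
    instr = stop["instructions"] - start["instructions"]
    cycles = stop["cycles"] - start["cycles"]
    return instr / cycles

before = {"instructions": 1_000_000, "cycles": 800_000}
after  = {"instructions": 5_000_000, "cycles": 2_800_000}
print(round(ipc(before, after), 2))  # 4M instructions / 2M cycles = 2.0
```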
Web-based monitoring tools for Resistive Plate Chambers in the CMS experiment at CERN
NASA Astrophysics Data System (ADS)
Kim, M. S.; Ban, Y.; Cai, J.; Li, Q.; Liu, S.; Qian, S.; Wang, D.; Xu, Z.; Zhang, F.; Choi, Y.; Kim, D.; Goh, J.; Choi, S.; Hong, B.; Kang, J. W.; Kang, M.; Kwon, J. H.; Lee, K. S.; Lee, S. K.; Park, S. K.; Pant, L. M.; Mohanty, A. K.; Chudasama, R.; Singh, J. B.; Bhatnagar, V.; Mehta, A.; Kumar, R.; Cauwenbergh, S.; Costantini, S.; Cimmino, A.; Crucy, S.; Fagot, A.; Garcia, G.; Ocampo, A.; Poyraz, D.; Salva, S.; Thyssen, F.; Tytgat, M.; Zaganidis, N.; Doninck, W. V.; Cabrera, A.; Chaparro, L.; Gomez, J. P.; Gomez, B.; Sanabria, J. C.; Avila, C.; Ahmad, A.; Muhammad, S.; Shoaib, M.; Hoorani, H.; Awan, I.; Ali, I.; Ahmed, W.; Asghar, M. I.; Shahzad, H.; Sayed, A.; Ibrahim, A.; Aly, S.; Assran, Y.; Radi, A.; Elkafrawy, T.; Sharma, A.; Colafranceschi, S.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Iaselli, G.; Loddo, F.; Maggi, M.; Nuzzo, S.; Pugliese, G.; Radogna, R.; Venditti, R.; Verwilligen, P.; Benussi, L.; Bianco, S.; Piccolo, D.; Paolucci, P.; Buontempo, S.; Cavallo, N.; Merola, M.; Fabozzi, F.; Iorio, O. M.; Braghieri, A.; Montagna, P.; Riccardi, C.; Salvini, P.; Vitulo, P.; Vai, I.; Magnani, A.; Dimitrov, A.; Litov, L.; Pavlov, B.; Petkov, P.; Aleksandrov, A.; Genchev, V.; Iaydjiev, P.; Rodozov, M.; Sultanov, G.; Vutova, M.; Stoykova, S.; Hadjiiska, R.; Ibargüen, H. S.; Morales, M. I. P.; Bernardino, S. C.; Bagaturia, I.; Tsamalaidze, Z.; Crotty, I.
2014-10-01
The Resistive Plate Chambers (RPC) are used in the CMS experiment at the trigger level and also in the standard offline muon reconstruction. In order to guarantee the quality of the data collected and to monitor the detector performance online, a set of tools has been developed in CMS which is heavily used in the RPC system. Web-based monitoring (WBM) is a set of Java servlets that allows users to check the performance of the hardware during data taking, providing distributions and history plots of all the parameters. The functionalities of the RPC WBM tools are presented along with studies of the detector performance as a function of growing luminosity and environmental conditions that are tracked over time.
Process tool monitoring and matching using interferometry technique
NASA Astrophysics Data System (ADS)
Anberg, Doug; Owen, David M.; Mileham, Jeffrey; Lee, Byoung-Ho; Bouche, Eric
2016-03-01
The semiconductor industry makes dramatic device technology changes over short time periods. As the semiconductor industry advances toward the 10 nm device node, more precise management and control of processing tools has become a significant manufacturing challenge. Some processes require multiple tool sets and some tools have multiple chambers for mass production. Tool and chamber matching has become a critical consideration for meeting today's manufacturing requirements. Additionally, process tool and chamber conditions have to be monitored to ensure uniform process performance across the tool and chamber fleet. There are many parameters for managing and monitoring tools and chambers. Particle defect monitoring is a well-known and established example where defect inspection tools can directly detect particles on the wafer surface. However, leading edge processes are driving the need to also monitor invisible defects, i.e. stress, contamination, etc., because some device failures cannot be directly correlated with traditional visualized defect maps or other known sources. Some failure maps show the same signatures as stress or contamination maps, which implies correlation to device performance or yield. In this paper we present process tool monitoring and matching using an interferometry technique. There are many types of interferometry techniques used for various process monitoring applications. We use a Coherent Gradient Sensing (CGS) interferometer which is self-referencing and enables high throughput measurements. Using this technique, we can quickly measure the topography of an entire wafer surface and obtain stress and displacement data from the topography measurement. For improved tool and chamber matching and reduced device failure, wafer stress measurements can be implemented as a regular tool or chamber monitoring test for either unpatterned or patterned wafers, serving as a useful criterion for improved process stability.
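The curvature-to-stress step alluded to above is commonly done with Stoney's approximation; the sketch below uses that textbook formula with illustrative silicon-wafer numbers that are not taken from the paper:

```python
# Film stress from substrate curvature via Stoney's approximation:
#   sigma = E_s * h_s**2 * kappa / (6 * (1 - nu_s) * h_f)
# where kappa is the measured curvature (1/m), E_s and nu_s are the
# substrate's Young's modulus and Poisson ratio, and h_s, h_f are the
# substrate and film thicknesses.
def stoney_stress(curvature, E_s, nu_s, h_s, h_f):
    return E_s * h_s**2 * curvature / (6.0 * (1.0 - nu_s) * h_f)

# Illustrative silicon-wafer values (not from the paper):
sigma = stoney_stress(curvature=0.02,       # 1/m, i.e. a 50 m radius
                      E_s=130e9, nu_s=0.28,  # silicon modulus / Poisson
                      h_s=725e-6, h_f=1e-6)  # substrate / film thickness
print(round(sigma / 1e6, 1), "MPa")
```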
Knowledge-Acquisition Tool For Expert System
NASA Technical Reports Server (NTRS)
Disbrow, James D.; Duke, Eugene L.; Regenie, Victoria A.
1988-01-01
Digital flight-control systems are monitored by a computer program that evaluates conditions and recommends actions. Flight-systems engineers for advanced, high-performance aircraft use a knowledge-acquisition tool for an expert-system flight-status monitor that supplies interpretative data. The interpretative function is especially important in time-critical, high-stress situations because it facilitates problem identification and corrective strategy. Conditions are evaluated and recommendations made by ground-based engineers having essential knowledge for analysis and monitoring of the performance of advanced aircraft systems.
Tools to manage the enterprise-wide picture archiving and communications system environment.
Lannum, L M; Gumpf, S; Piraino, D
2001-06-01
The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.
A New Network Modeling Tool for the Ground-based Nuclear Explosion Monitoring Community
NASA Astrophysics Data System (ADS)
Merchant, B. J.; Chael, E. P.; Young, C. J.
2013-12-01
Network simulations have long been used to assess the performance of monitoring networks to detect events for such purposes as planning station deployments and network resilience to outages. The standard tool has been the SAIC-developed NetSim package. With correct parameters, NetSim can produce useful simulations; however, the package has several shortcomings: an older language (FORTRAN), an emphasis on seismic monitoring with limited support for other technologies, limited documentation, and a limited parameter set. Thus, we are developing NetMOD (Network Monitoring for Optimal Detection), a Java-based tool designed to assess the performance of ground-based networks. NetMOD's advantages include: coded in a modern language that is multi-platform, utilizes modern computing performance (e.g. multi-core processors), incorporates monitoring technologies other than seismic, and includes a well-validated default parameter set for the IMS stations. NetMOD is designed to be extendable through a plugin infrastructure, so new phenomenological models can be added. Development of the Seismic Detection Plugin is being pursued first. Seismic location and infrasound and hydroacoustic detection plugins will follow. By making NetMOD an open-release package, it can hopefully provide a common tool that the monitoring community can use to produce assessments of monitoring networks and to verify assessments made by others.
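The core of such a network performance simulation is combining per-station detection probabilities into a network-level detection probability. A toy exact calculation, with made-up station probabilities and a minimum-station detection rule (not NetMOD's actual models):

```python
# Exact probability that at least n_min of the stations detect an event,
# treating each station's detection as an independent Bernoulli trial.
from itertools import product

def network_detection_prob(p_stations, n_min):
    total = 0.0
    for outcome in product([0, 1], repeat=len(p_stations)):
        if sum(outcome) >= n_min:
            prob = 1.0
            for hit, p in zip(outcome, p_stations):
                prob *= p if hit else (1.0 - p)
            total += prob
    return total

# Three hypothetical stations; require detections at two of them.
print(round(network_detection_prob([0.9, 0.8, 0.7], n_min=2), 4))
```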
New tools using the hardware performance monitor to help users tune programs on the Cray X-MP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Rudsinski, L.; Doak, J.
1991-09-25
The performance of a Cray system is highly dependent on the tuning techniques used by individuals on their codes. Many of our users were not taking advantage of the tuning tools that allow them to monitor their own programs by using the Hardware Performance Monitor (HPM). We therefore modified UNICOS to collect HPM data for all processes and to report Mflop ratings based on users, programs, and time used. Our tuning efforts are now being focused on the users and programs that have the best potential for performance improvements. These modifications and some of the more striking performance improvements are described.
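The reporting step described (Mflop ratings by user and by program) amounts to an aggregation like the following sketch; the field names and numbers are invented for illustration:

```python
# Aggregate per-process HPM samples (flop counts and CPU seconds) into
# Mflop ratings keyed by user or by program.
from collections import defaultdict

def mflop_by_key(samples, key):
    flops, secs = defaultdict(float), defaultdict(float)
    for s in samples:
        flops[s[key]] += s["flops"]
        secs[s[key]] += s["cpu_seconds"]
    return {k: flops[k] / secs[k] / 1e6 for k in flops}

samples = [
    {"user": "alice", "program": "cfd", "flops": 1.2e9, "cpu_seconds": 10.0},
    {"user": "alice", "program": "fft", "flops": 4.0e8, "cpu_seconds": 10.0},
    {"user": "bob",   "program": "cfd", "flops": 9.0e8, "cpu_seconds": 30.0},
]
print(mflop_by_key(samples, "user"))  # alice -> 80.0 Mflops, bob -> 30.0
```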
Instrumentation, performance visualization, and debugging tools for multiprocessors
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.
1991-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
Langeveld, J G; de Haan, C; Klootwijk, M; Schilperoort, R P S
2012-01-01
Storm water separating manifolds in house connections have been introduced as a cost effective solution to disconnect impervious areas from combined sewers. Such manifolds have been applied by the municipality of Breda, the Netherlands. In order to investigate the performance of the manifolds, a monitoring technique (distributed temperature sensing or DTS) using fiber optic cables has been applied in the sewer system of Breda. This paper describes the application of DTS as a research tool in sewer systems. DTS proves to be a powerful tool to monitor the performance of (parts of) a sewer system in time and space. The research project showed that DTS is capable of monitoring the performance of house connections and identifying locations of inflow of both sewage and storm runoff. The research results show that the performance of storm water separating manifolds varies over time, thus making them unreliable.
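The localization idea behind DTS can be sketched simply: a position along the fibre whose temperature deviates from the ambient baseline marks an inflow (warm sewage or cold storm runoff). The data and threshold below are invented, not from the Breda study:

```python
# Flag fibre positions whose temperature deviates from the ambient
# baseline (estimated as the median reading) by more than a threshold.
def inflow_locations(temps, threshold=1.5):
    """temps: list of (position_m, temperature_C); returns flagged positions."""
    values = sorted(t for _, t in temps)
    ambient = values[len(values) // 2]   # median as ambient estimate
    return [x for x, t in temps if abs(t - ambient) > threshold]

trace = [(0, 12.1), (10, 12.0), (20, 17.8), (30, 12.2), (40, 12.1), (50, 9.9)]
print(inflow_locations(trace))  # warm inflow at 20 m, cold inflow at 50 m
```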
NASA Astrophysics Data System (ADS)
Daneshmend, L. K.; Pak, H. A.
1984-02-01
On-line monitoring of the cutting process on a CNC lathe is desirable to ensure unattended, fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters characterising the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining; however, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.
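A minimal sketch of the classifier concept, assuming a fitted linear torque-to-wear model (the coefficients and limits are invented placeholders, not the paper's fitted values):

```python
# Empirical (hypothetical) linear map from cutting torque to flank wear,
# plus a threshold rule that declares the tool worn.
def predict_wear(torque_nm, a=0.05, b=0.004):
    """Predicted flank wear in mm from cutting torque in N*m."""
    return a + b * torque_nm

def tool_state(torque_nm, wear_limit=0.30):
    return "worn" if predict_wear(torque_nm) >= wear_limit else "ok"

print(tool_state(40.0))  # 0.05 + 0.16 = 0.21 mm -> ok
print(tool_state(70.0))  # 0.05 + 0.28 = 0.33 mm -> worn
```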
DOT National Transportation Integrated Search
2017-03-01
The performance-planning tool developed as part of this project is intended for use with the guidebook for establishing and using rural performance based transportation system assessment, monitoring, planning, and programming to support the rural pla...
A Structural Health Monitoring Software Tool for Optimization, Diagnostics and Prognostics
2011-01-01
Seth S. Kessler, Eric B. Flynn, Christopher T... technology more accessible, and commercially practical. 1. INTRODUCTION Currently successful laboratory non-destructive testing and monitoring...
ERIC Educational Resources Information Center
Chavez-Gibson, Sarah
2013-01-01
The purpose of this study is to examine in depth the Comprehensive, Powerful, Academic Database (CPAD), a data decision-making tool that determines and identifies students at risk of dropping out of school, and how the CPAD assists administrators and teachers at an elementary campus to monitor progress, curriculum, and performance to improve student…
Enhanced methodology of focus control and monitoring on scanner tool
NASA Astrophysics Data System (ADS)
Chen, Yen-Jen; Kim, Young Ki; Hao, Xueli; Gomez, Juan-Manuel; Tian, Ye; Kamalizadeh, Ferhad; Hanson, Justin K.
2017-03-01
As the demand of the technology node shrinks from 14nm to 7nm, the reliability of tool monitoring techniques in advanced semiconductor fabs to achieve high yield and quality becomes more critical. Tool health monitoring methods involve periodic sampling of moderately processed test wafers to detect for particles, defects, and tool stability in order to ensure proper tool health. For lithography TWINSCAN scanner tools, the requirements for overlay stability and focus control are very strict. Current scanner tool health monitoring methods include running BaseLiner to ensure proper tool stability on a periodic basis. The focus measurement on YIELDSTAR by real-time or library-based reconstruction of critical dimensions (CD) and side wall angle (SWA) has been demonstrated as an accurate metrology input to the control loop. The high accuracy and repeatability of the YIELDSTAR focus measurement provides a common reference of scanner setup and user process. In order to further improve the metrology and matching performance, Diffraction Based Focus (DBF) metrology enabling accurate, fast, and non-destructive focus acquisition, has been successfully utilized for focus monitoring/control of TWINSCAN NXT immersion scanners. The optimal DBF target was determined to have minimized dose crosstalk, dynamic precision, set-get residual, and lens aberration sensitivity. By exploiting this new measurement target design, 80% improvement in tool-to-tool matching, >16% improvement in run-to-run mean focus stability, and >32% improvement in focus uniformity have been demonstrated compared to the previous BaseLiner methodology. Matching <2.4 nm across multiple NXT immersion scanners has been achieved with the new methodology of set baseline reference. This baseline technique, with either conventional BaseLiner low numerical aperture (NA=1.20) mode or advanced illumination high NA mode (NA=1.35), has also been evaluated to have consistent performance. 
This enhanced methodology of focus control and monitoring on multiple illumination conditions, opens an avenue to significantly reduce Focus-Exposure Matrix (FEM) wafer exposure for new product/layer best focus (BF) setup.
Prakash, Rangasamy; Krishnaraj, Vijayan; Zitoune, Redouane; Sheikh-Ahmad, Jamal
2016-01-01
Carbon fiber reinforced polymers (CFRPs) have found wide-ranging applications in numerous industrial fields such as aerospace, automotive, and shipping industries due to their excellent mechanical properties that lead to enhanced functional performance. In this paper, an experimental study on edge trimming of CFRP was done with various cutting conditions and different geometries of tools such as helical-, fluted-, and burr-type tools. The investigation involves the measurement of cutting forces for the different machining conditions and its effect on the surface quality of the trimmed edges. The modern cutting tools (router tools or burr tools) selected for machining CFRPs have complex geometries in cutting edges and surfaces, and therefore a traditional method of direct tool wear evaluation is not applicable. Acoustic emission (AE) sensing was employed for on-line monitoring of the performance of router tools to determine the relationship between AE signal and length of machining for different tool geometries. The investigation showed that the router tool with a flat cutting edge has better performance by generating lower cutting force and better surface finish with no delamination on trimmed edges. The mathematical modeling for the prediction of cutting forces was also done using Artificial Neural Network and Regression Analysis. PMID:28773919
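As a stand-in for the regression analysis mentioned (the paper also uses neural networks), an ordinary least-squares line relating machining length to cutting force can be fitted in a few lines; all numbers are invented for illustration:

```python
# Ordinary least-squares fit of y = a + b*x, written out explicitly.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b   # intercept, slope

lengths = [0.0, 0.5, 1.0, 1.5, 2.0]       # metres trimmed (made up)
forces  = [20.0, 24.0, 28.0, 32.0, 36.0]  # newtons (perfectly linear here)
a, b = fit_line(lengths, forces)
print(a, b)  # intercept 20.0 N, slope 8.0 N per metre machined
```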
Engine health monitoring systems: Tools for improved maintenance management in the 1980's
NASA Technical Reports Server (NTRS)
Kimball, J. C.
1981-01-01
The performance monitoring aspect of maintenance, characteristic of the engine health monitoring system are discussed. An overview of the system activities is presented and a summary of programs for improved monitoring in the 1980's are discussed.
Development of a knowledge acquisition tool for an expert system flight status monitor
NASA Technical Reports Server (NTRS)
Disbrow, J. D.; Duke, E. L.; Regenie, V. A.
1986-01-01
Two of the main issues in artificial intelligence today are knowledge acquisition and knowledge representation. The Dryden Flight Research Facility of NASA's Ames Research Center is presently involved in the design and implementation of an expert system flight status monitor that will provide expertise and knowledge to aid the flight systems engineer in monitoring today's advanced high-performance aircraft. The flight status monitor can be divided into two sections: the expert system itself and the knowledge acquisition tool. The knowledge acquisition tool, the means it uses to extract knowledge from the domain expert, and how that knowledge is represented for computer use are discussed. An actual aircraft system has been codified by this tool with great success. Future real-time use of the expert system has been facilitated by using the knowledge acquisition tool to easily generate a logically consistent and complete knowledge base.
NASA Astrophysics Data System (ADS)
Zhang, P. P.; Guo, Y.; Wang, B.
2017-05-01
The main problems in milling difficult-to-machine materials are the high cutting temperature and rapid tool wear. However, it is difficult to investigate tool wear directly during machining. Tool wear and cutting chip formation are two of the most important indicators of machining efficiency and quality. The purpose of this paper is to develop a model of tool wear based on cutting chip formation (width of chip and radian of chip) for difficult-to-machine materials, so that tool wear can be monitored through chip formation. A milling experiment on a machining centre with three sets of cutting parameters was performed to obtain chip formation and tool wear. The experimental results show that tool wear increases gradually as cutting proceeds; in contrast, width of chip and radian of chip decrease. The model is developed by fitting the experimental data and applying formula transformations. Most of the monitored errors of tool wear estimated from chip formation are less than 10%, with the smallest error being 0.2%. Overall, errors based on the radian of chip are smaller than those based on the width of chip. This offers a new way to monitor and detect tool wear from cutting chip formation when milling difficult-to-machine materials.
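The monitoring step reported above (estimating wear from chip geometry and quoting a relative error against measured wear) can be sketched with a hypothetical fitted model; the coefficients and measurements are invented, not the paper's:

```python
# Hypothetical fitted model: wear decreases as chip width grows, which
# matches the trend reported (chip width shrinks as wear increases).
def wear_from_chip_width(width_mm, a=1.2, b=-0.5):
    return a + b * width_mm

def relative_error(predicted, measured):
    return abs(predicted - measured) / measured

pred = wear_from_chip_width(1.0)             # 0.7 mm predicted wear
print(round(relative_error(pred, 0.70), 3))  # perfect agreement -> 0.0
print(round(relative_error(pred, 0.65), 3))  # ~7.7% monitored error
```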
Behavioral Health Support of NASA Astronauts for International Space Station Missions
NASA Technical Reports Server (NTRS)
Sipes, Walter
2000-01-01
Two areas of focus for optimizing behavioral health and human performance during International Space Station missions are 1) sleep and circadian assessment and 2) behavioral medicine. The Mir experience provided the opportunity to examine the use and potential effectiveness of tools and procedures to support the behavioral health of the crew. The experience of NASA has shown that on-orbit performance can be better maintained if behavioral health, sleep, and circadian issues are effectively monitored and properly addressed. For example, schedules can be tailored based upon fatigue level of crews and other behavioral and cognitive indicators to maximize performance. Previous research and experience with long duration missions has resulted in the development and upgrade of tools used to monitor fatigue, stress, cognitive function, and behavioral health. Self-assessment and objective tools such as the Spaceflight Cognitive Assessment Tool have been developed and refined to effectively address behavioral medicine countermeasures in space.
Rinzler, Charles C.; Gray, William C.; Faircloth, Brian O.; Zediker, Mark S.
2016-02-23
A monitoring and detection system for use on high power laser systems, long distance high power laser systems and tools for performing high power laser operations. In particular, the monitoring and detection systems provide break detection and continuity protection for performing high power laser operations on, and in, remote and difficult to access locations.
GenSAA: A tool for advancing satellite monitoring with graphical expert systems
NASA Technical Reports Server (NTRS)
Hughes, Peter M.; Luczak, Edward C.
1993-01-01
During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real time data for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At the NASA Goddard Space Flight Center, fault-isolation expert systems have been developed to support data monitoring and fault detection tasks in satellite control centers. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industry.
The Effect of Age and Task Difficulty
ERIC Educational Resources Information Center
Mallo, Jason; Nordstrom, Cynthia R.; Bartels, Lynn K.; Traxler, Anthony
2007-01-01
Electronic Performance Monitoring (EPM) is a common technique used to record employee performance. EPM may include counting computer keystrokes, monitoring employees' phone calls or internet activity, or documenting time spent on work activities. Despite EPM's prevalence, no studies have examined how this management tool affects older workers--a…
Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs
NASA Astrophysics Data System (ADS)
RIngenburg, Michael F.
Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. 
Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new ones; our online tools provide new insights into the effects of approximation on output quality; and our monitors control output quality while still maintaining significant energy-efficiency gains.
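A toy version of such a quality monitor, assuming a kernel whose approximation noise is bounded; everything here is illustrative, not the thesis's implementation (in deployment the monitor would run on only a sampled subset of inputs to keep its energy cost low):

```python
# Quality monitor sketch: re-run inputs precisely and measure the mean
# relative error of the approximate kernel's outputs.
import random

def approx_square(x, rng):
    """Stand-in approximate kernel: the exact square plus bounded noise."""
    return x * x * (1.0 + rng.uniform(-0.05, 0.05))

def monitored_mean_error(inputs, rng):
    errs = [abs(approx_square(x, rng) - x * x) / (x * x) for x in inputs]
    return sum(errs) / len(errs)

rng = random.Random(42)                       # deterministic for the demo
err = monitored_mean_error([1.0, 2.0, 3.0, 4.0], rng)
print(0.0 <= err <= 0.05)  # noise is bounded at 5%, so the mean error is too
```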
This product is an easy-to-use Excel-based macro analysis tool (MAT) for performing comparisons of air sensor data with reference data and interpreting the results. This tool tackles one of the biggest hurdles in citizen-led community air monitoring projects – working with ...
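The comparison the macro tool performs can be reduced to a few lines of paired statistics; the sensor and reference readings below are made up:

```python
# Mean bias and root-mean-square error of paired sensor/reference samples,
# the basic figures a sensor-vs-reference comparison reports.
def bias_and_rmse(sensor, reference):
    diffs = [s - r for s, r in zip(sensor, reference)]
    bias = sum(diffs) / len(diffs)
    rmse = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return bias, rmse

sensor    = [10.2, 12.1, 9.8, 11.5]   # made-up PM2.5 readings (ug/m3)
reference = [10.0, 12.0, 10.0, 11.0]  # co-located reference monitor
b, r = bias_and_rmse(sensor, reference)
print(round(b, 3), round(r, 3))
```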
A strip chart recorder pattern recognition tool kit for Shuttle operations
NASA Technical Reports Server (NTRS)
Hammen, David G.; Moebes, Travis A.; Shelton, Robert O.; Savely, Robert T.
1993-01-01
During Space Shuttle operations, Mission Control personnel monitor numerous mission-critical systems such as electrical power; guidance, navigation, and control; and propulsion by means of paper strip chart recorders. For example, electrical power controllers monitor strip chart recorder pen traces to identify onboard electrical equipment activations and deactivations. Recent developments in pattern recognition technologies coupled with new capabilities that distribute real-time Shuttle telemetry data to engineering workstations make it possible to develop computer applications that perform some of the low-level monitoring now performed by controllers. The number of opportunities for such applications suggests a need to build a pattern recognition tool kit to reduce software development effort through software reuse. We are building pattern recognition applications while keeping such a tool kit in mind. We demonstrated the initial prototype application, which identifies electrical equipment activations, during three recent Shuttle flights. This prototype was developed to test the viability of the basic system architecture, to evaluate the performance of several pattern recognition techniques including those based on cross-correlation, neural networks, and statistical methods, to understand the interplay between an advanced automation application and human controllers to enhance utility, and to identify capabilities needed in a more general-purpose tool kit.
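One technique the prototype evaluated, cross-correlation template matching, reduces to a sketch like this (the trace and template are invented; a real strip-chart application would work on telemetry streams):

```python
# Slide a zero-mean step template over a trace and report the lag with
# maximal cross-correlation as the candidate activation time.
def best_match(signal, template):
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        score = sum(signal[lag + i] * template[i]
                    for i in range(len(template)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

trace = [0, 0, 0, 0, 5, 5, 5, 5]  # equipment switches on at sample 4
step = [-1, -1, 1, 1]             # zero-mean step-edge template
print(best_match(trace, step))    # lag 2 aligns the template edge at sample 4
```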
Spiral Bevel Gear Damage Detection Using Decision Fusion Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Handschuh, Robert F.; Afjeh, Abdollah A.
2002-01-01
A diagnostic tool for detecting damage to spiral bevel gears was developed. Two different monitoring technologies, oil debris analysis and vibration, were integrated using data fusion into a health monitoring system for detecting surface fatigue pitting damage on gears. This integrated system showed improved detection and decision-making capabilities as compared to using individual monitoring technologies. This diagnostic tool was evaluated by collecting vibration and oil debris data from fatigue tests performed in the NASA Glenn Spiral Bevel Gear Fatigue Rigs. Data was collected during experiments performed in this test rig when pitting damage occurred. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spiral bevel gears.
Evaluation of the XSENS Force Shoe on ISS
NASA Technical Reports Server (NTRS)
Hanson, A. M.; Peters, B. T.; Newby, N.; Ploutz-Snyder, L.
2014-01-01
The Advanced Resistive Exercise Device (ARED) offers crewmembers a wide range of resistance exercises but does not provide any type of load monitoring; any load data received are based on crew self-report of the dialed-in load. This lack of real-time ARED load monitoring severely limits research analysis. To address this issue, portable load monitoring technologies are being evaluated to act as a surrogate for ARED's failed instrumentation. The XSENS ForceShoe™ is a commercial portable load monitoring tool that performed well in ground tests. The ForceShoe™ was recently deployed on the International Space Station (ISS) and is being evaluated as a tool to monitor ARED loads.
Analysis of Trinity Power Metrics for Automated Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalenko, Ashley Christine
This is a presentation from Los Alamos National Laboratory (LANL) on the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for the analysis, tools used, the methodology, work performed during the summer, and planned future work.
Floristic Quality Index of Restored Wetlands in Coastal Louisiana
2017-08-01
been used to monitor and assess project performance, resilience, and adaptive management needs. An emerging tool for performing bioassessments in...condition have been used to monitor and assess project performance, resilience, and adaptive management needs. There are three basic levels of wetland...result of saltwater intrusion and rapid subsidence; nevertheless, multiple hydrologic restoration projects (Naomi Outfall Management BA-03c and
Monitoring student attendance, participation, and performance improvement: an instrument and forms.
Kosta, Joanne
2012-01-01
When students receive consistent and fair feedback about their behavior, program liability decreases. To help students gain a clearer understanding of minimum program standards and the consequences of substandard performance, the author developed attendance and participation monitoring and performance improvement instruments. The author discusses the tools, which address absenteeism, tardiness, and unprofessional and unsafe clinical behaviors among students.
NASA Astrophysics Data System (ADS)
Pulok, Md Kamrul Hasan
Intelligent and effective monitoring of power system stability in control centers is one of the key issues in smart grid technology for preventing unwanted power system blackouts. Voltage stability analysis is one of the most important requirements for control center operation in the smart grid era. With the advent of Phasor Measurement Unit (PMU), or synchrophasor, technology, real-time monitoring of power system voltage stability is now a reality. This work uses real-time PMU data to derive a voltage stability index for monitoring voltage-stability-related contingency situations in power systems. The developed tool uses PMU data to calculate a voltage stability index whose numerical value indicates the relative closeness to instability. The IEEE 39-bus New England power system was modeled and run on a Real-Time Digital Simulator that streams PMU data over the Internet using the IEEE C37.118 protocol. A phasor data concentrator (PDC) was set up that receives the streaming PMU data and stores it in a Microsoft SQL database server. The developed voltage stability monitoring (VSM) tool then retrieves the phasor measurements from the SQL server, performs real-time state estimation of the whole network, calculates the voltage stability index, ranks the most vulnerable transmission lines, and displays all results in a graphical user interface, all in near real time. Control centers can easily monitor the system's condition using this tool and take precautionary actions if needed.
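The abstract does not state which voltage stability index the tool computes. As one hedged illustration only, a line-based index such as the Fast Voltage Stability Index (FVSI), which approaches 1.0 as a line nears voltage collapse, can be computed from phasor quantities and used to rank lines; the function names and per-unit values below are illustrative, not the authors' implementation:

```python
def fvsi(v_send, q_recv, z_line, x_line):
    """Fast Voltage Stability Index for one line: 4*Z^2*Qr / (Vs^2 * X).

    Values approach 1.0 as the receiving end nears voltage collapse.
    All quantities are in per-unit.
    """
    return (4.0 * z_line ** 2 * q_recv) / (v_send ** 2 * x_line)


def rank_lines(measurements):
    """Rank lines by index value, most vulnerable first.

    measurements maps line name -> (v_send, q_recv, z_line, x_line).
    """
    scored = [(name, fvsi(*m)) for name, m in measurements.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

A ranking like this, refreshed on every PMU data frame, is one way a control-center display could order "most vulnerable transmission lines" as the abstract describes.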
ERIC Educational Resources Information Center
Walker, Dale; Carta, Judith J.; Greenwood, Charles R.; Buzhardt, Joseph F.
2008-01-01
Progress monitoring tools have been shown to be essential elements in current approaches to intervention problem-solving models. Such tools have been valuable not only in marking individual children's level of performance relative to peers but also in measuring change in skill level in a way that can be attributed to intervention and development.…
On the use of high-frequency SCADA data for improved wind turbine performance monitoring
NASA Astrophysics Data System (ADS)
Gonzalez, E.; Stephen, B.; Infield, D.; Melero, J. J.
2017-11-01
SCADA-based condition monitoring of wind turbines facilitates the move from costly corrective repairs towards more proactive maintenance strategies. In this work, we advocate the use of high-frequency SCADA data and quantile regression to build a cost effective performance monitoring tool. The benefits of the approach are demonstrated through the comparison between state-of-the-art deterministic power curve modelling techniques and the suggested probabilistic model. Detection capabilities are compared for low and high-frequency SCADA data, providing evidence for monitoring at higher resolutions. Operational data from healthy and faulty turbines are used to provide a practical example of usage with the proposed tool, effectively achieving the detection of an incipient gearbox malfunction at a time horizon of more than one month prior to the actual occurrence of the failure.
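The general idea of a quantile-regression power-curve monitor can be sketched as follows. Synthetic wind/power data and a gradient-boosted quantile model stand in for real SCADA signals and for the authors' exact probabilistic model, which the abstract does not specify:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for SCADA samples: wind speed (m/s) vs. power (kW).
rng = np.random.default_rng(0)
wind = rng.uniform(3.0, 15.0, 500)
power = np.clip(0.5 * wind ** 3, 0.0, 1500.0) + rng.normal(0.0, 20.0, 500)

# Fit the lower tail of the power curve rather than a single deterministic fit.
lo_q = GradientBoostingRegressor(loss="quantile", alpha=0.05)
lo_q.fit(wind[:, None], power)


def underperforming(wind_speeds, powers):
    """Flag samples whose power falls below the fitted 5% quantile band."""
    band = lo_q.predict(np.asarray(wind_speeds)[:, None])
    return np.asarray(powers) < band
```

Persistent flags over days or weeks, rather than single excursions, are what would suggest an incipient fault such as the gearbox malfunction described above.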
Tapered Roller Bearing Damage Detection Using Decision Fusion Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Kreider, Gary; Fichter, Thomas
2006-01-01
A diagnostic tool was developed for detecting fatigue damage of tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. A diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests conducted using health monitoring hardware. Failure progression tests were performed with tapered roller bearings under simulated engine load conditions. Tests were performed on one healthy bearing and three pre-damaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor and three accelerometers were monitored and recorded for the occurrence of bearing failure. The bearing was removed and inspected periodically for damage progression throughout testing. Using data fusion techniques, two different monitoring technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting bearing surface fatigue pitting damage. The data fusion diagnostic tool was evaluated during bearing failure progression tests under simulated engine load conditions. This integrated system showed improved detection of fatigue damage and health assessment of the tapered roller bearings as compared to using individual health monitoring technologies.
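The abstract does not disclose the fusion rule used. As a hedged sketch, a naive-Bayes style combination of two independent per-sensor damage probabilities illustrates why fused evidence can outperform either channel alone; this is one simple form of decision fusion, not necessarily the authors' method:

```python
def fuse(p_vibration, p_oil_debris):
    """Combine two independent damage probabilities, naive-Bayes style.

    Agreement between channels pushes the fused probability past
    either individual estimate; a neutral channel (0.5) has no effect.
    """
    num = p_vibration * p_oil_debris
    den = num + (1.0 - p_vibration) * (1.0 - p_oil_debris)
    return num / den
```

For example, two channels that each report 0.7 fuse to roughly 0.84, which mirrors the paper's finding that the integrated system detects damage better than either monitoring technology individually.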
Sentinel-2 ArcGIS Tool for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Plesoianu, Alin; Cosmin Sandric, Ionut; Anca, Paula; Vasile, Alexandru; Calugaru, Andreea; Vasile, Cristian; Zavate, Lucian
2017-04-01
This paper addresses one of the biggest challenges regarding Sentinel-2 data: the need for an efficient tool to access and process the large collection of images that are available. Consequently, developing a tool for the automation of Sentinel-2 data analysis is the most immediate need. We developed a series of tools for the automation of Sentinel-2 data download and processing for vegetation health monitoring. The tools automatically perform the following operations: downloading image tiles from ESA's Scientific Hub or other vendors (Amazon), pre-processing the images to extract the 10-m bands, creating image composites, applying a series of vegetation indices (NDVI, OSAVI, etc.), and performing change detection analyses on different temporal data sets. All of these tools run dynamically in the ArcGIS Platform, without the need to create intermediate datasets (rasters, layers), as the images are processed on the fly to avoid data duplication. Finally, they allow complete integration with the ArcGIS environment and workflows.
NASA Astrophysics Data System (ADS)
Hiwarkar, V. R.; Babitsky, V. I.; Silberschmidt, V. V.
2013-07-01
Numerous techniques are available for monitoring structural health. Most of these techniques are expensive and time-consuming. In this paper, vibration-based techniques are explored together with their use as diagnostic tools for structural health monitoring. Finite-element simulations are used to study the effect of material nonlinearity on the dynamics of a cracked bar. Additionally, several experiments are performed to study the effect of the crack's vibro-impact behavior on the bar's dynamics. The experiments showed that crack-tip plasticity and the vibro-impact interaction of the crack faces change the natural frequency of the cracked bar and generate higher harmonics; these effects can be used as a diagnostic tool for structural health monitoring.
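A common way to exploit the higher-harmonic effect described above is to track the amplitude ratio of the second harmonic to the fundamental under a known sinusoidal drive. The sketch below is illustrative of that general signal-processing idea, not the authors' experimental processing chain:

```python
import numpy as np


def harmonic_ratio(signal, fs, f0):
    """Amplitude ratio of the 2nd harmonic to the fundamental at drive frequency f0.

    A 'breathing' crack rectifies part of the response, so this ratio
    tends to grow with crack severity in many vibro-impact models.
    fs is the sampling rate in Hz.
    """
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    fund = spec[np.argmin(np.abs(freqs - f0))]
    second = spec[np.argmin(np.abs(freqs - 2.0 * f0))]
    return second / fund
```

A healthy (linear) structure driven at f0 shows a ratio near zero, while crack-face contact injects energy at 2·f0 and raises it.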
Field Audit Checklist Tool (FACT)
Download EPA's Field Audit Checklist Tool (FACT). FACT is intended to help auditors perform field audits by making it easy to view monitoring plan, quality assurance, and emissions data, and it provides access to data collected under MATS.
Tools for monitoring system suitability in LC MS/MS centric proteomic experiments.
Bereman, Michael S
2015-03-01
With advances in liquid chromatography coupled to tandem mass spectrometry technologies combined with the continued goals of biomarker discovery, clinical applications of established biomarkers, and integrating large multiomic datasets (i.e. "big data"), there remains an urgent need for robust tools to assess instrument performance (i.e. system suitability) in proteomic workflows. To this end, several freely available tools have been introduced that monitor a number of peptide identification (ID) and/or peptide-ID-free metrics. Peptide ID metrics include the numbers of proteins, peptides, or peptide spectral matches identified from a complex mixture. Peptide-ID-free metrics include retention time reproducibility, full width at half maximum, ion injection times, and integrated peptide intensities. The main driving force in the development of these tools is to monitor both intra- and interexperiment performance variability and to identify sources of variation. The purpose of this review is to summarize and evaluate these tools based on versatility, automation, vendor neutrality, metrics monitored, and visualization capabilities. In addition, the implementation of a robust system suitability workflow is discussed in terms of metrics, type of standard, and frequency of evaluation, along with the obstacles to overcome prior to incorporating a more proactive approach to overall quality control in liquid chromatography coupled to tandem mass spectrometry based proteomic workflows. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
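As a minimal illustration of one peptide-ID-free metric named above, the run-to-run coefficient of variation of a peptide's retention time across QC injections can be tracked; this is a toy sketch, not any specific tool's implementation:

```python
import statistics


def rt_cv_percent(retention_times):
    """Coefficient of variation (%) of one peptide's retention time across QC runs.

    A rising CV flags chromatographic drift before peptide IDs are lost.
    """
    return 100.0 * statistics.stdev(retention_times) / statistics.mean(retention_times)
```

In practice such a metric would be computed per standard peptide on every QC run and plotted against a control limit.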
New methodology to baseline and match AME polysilicon etcher using advanced diagnostic tools
NASA Astrophysics Data System (ADS)
Poppe, James; Shipman, John; Reinhardt, Barbara E.; Roussel, Myriam; Hedgecock, Raymond; Fonda, Arturo
1999-09-01
As process controls tighten in the semiconductor industry, the need to understand the variables that determine system performance becomes more important. For plasma etch systems, process success depends on the control of key parameters such as vacuum integrity, pressure, gas flows, and RF power. It is imperative to baseline, monitor, and control these variables. This paper presents an overview of the methods and tools used by the Motorola BMC fabrication facility to characterize an Applied Materials polysilicon etcher. Tool performance data obtained from our traditional measurement techniques are limited in their scope and do not provide a complete picture of ultimate tool performance. Presently the BMC traditional characterization tools provide a snapshot of the static operation of the equipment under test (EUT); however, complete evaluation of the dynamic performance cannot be monitored without the aid of specialized diagnostic equipment. To provide a complete system baseline evaluation of the polysilicon etcher, three diagnostic tools were utilized: the Lucas Labs Vacuum Diagnostic System, a Residual Gas Analyzer, and the ENI Voltage/Impedance Probe. The diagnostic methodology used to baseline and match key parameters of qualified production equipment has had an immense impact on other equipment characterization in the facility. It has resulted in reduced cycle time for new equipment introduction as well.
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon, and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
ERIC Educational Resources Information Center
Caldarella, Paul; Larsen, Ross A. A.; Williams, Leslie; Wehby, Joseph H.; Wills, Howard; Kamps, Debra
2017-01-01
Numerous well-validated academic progress monitoring tools are used in schools, but there are fewer behavioral progress monitoring measures available. Some brief behavior rating scales have been shown to be effective in monitoring students' progress, but most focus only on students' social skills and do not address critical academic-related…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, J.
The U.S. Department of Energy's (DOE) Office of Environmental Management (EM) has the responsibility for cleaning up 60 sites in 22 states associated with the legacy of the nation's nuclear weapons program and other research and development activities. These sites are unique, and many of the technologies needed to successfully disposition the associated wastes have yet to be developed or would require significant re-engineering to be adapted for future EM cleanup efforts. In 2008, the DOE-EM Engineering and Technology Program (EM-22) released the Engineering and Technology Roadmap in response to Congressional direction and the need to focus on longer-term activities required for the completion of the aforementioned cleanup program. One of the strategic initiatives included in the Roadmap was to enhance long-term performance monitoring, defined as 'Develop and deploy cost effective long-term strategies and technologies to monitor closure sites (including soil, groundwater, and surface water) with multiple contaminants (organics, metals and radionuclides) to verify integrated long-term cleanup performance'. To support this long-term monitoring (LTM) strategic initiative, EM-22 and the Savannah River National Laboratory (SRNL) organized and held an interactive symposium, known as the 2009 DOE-EM Long-Term Monitoring Technical Forum, to define and prioritize LTM improvement strategies and products that could be realized within a 3- to 5-year investment time frame. This near-term focus on fundamental research would then be used as a foundation for development of applied programs to improve the closure and long-term performance of EM's legacy waste sites. The Technical Forum was held in Atlanta, GA on February 11-12, 2009, and attended by 57 professionals, with a focus on identifying those areas of opportunity that would most effectively advance the transition from current practices to a more effective strategy for the LTM paradigm.
The meeting format encompassed three break-out sessions, which focused on needs and opportunities associated with the following LTM technical areas: (1) Performance Monitoring Tools, (2) Systems, and (3) Information Management. The specific objectives of the Technical Forum were to identify: (1) technical targets for reducing EM costs for life-cycle monitoring; (2) cost-effective approaches and tools to support the transition from active to passive remedies at EM waste sites; and (3) specific goals and objectives associated with the lifecycle monitoring initiatives outlined within the Roadmap. The first Breakout Session on LTM performance measurement tools focused on the integration and improvement of LTM performance measurement and monitoring tools that deal with parameters such as ecosystems, boundary conditions, geophysics, remote sensing, biomarkers, ecological indicators and other types of data used in LTM configurations. Although specific tools were discussed, it was recognized that the Breakout Session could not comprehensively discuss all monitoring technologies in the time provided. Attendees provided key references where other organizations have assessed monitoring tools. Three investment sectors were developed in this Breakout Session. The second Breakout Session was on LTM systems. The focus of this session was to identify new and inventive LTM systems addressing the framework for interactive parameters such as infrastructure, sensors, diagnostic features, field screening tools, state of the art characterization monitoring systems/concepts, and ecosystem approaches to site conditions and evolution. LTM systems consist of the combination of data acquisition and management efforts, data processing and analysis efforts and reporting tools. 
The objective of the LTM systems workgroup was to provide a vision and path towards novel and innovative LTM systems, which should be able to provide relevant, actionable information on system performance in a cost-effective manner. Two investment sectors were developed in this Breakout Session. The last Breakout Session of the Technical Forum was on LTM information management. The session focused on the development and implementation of novel information management systems for LTM, including techniques to address data issues such as: efficient management of large and diverse datasets; consistency and comparability in data management and incorporation of accurate historical information; data interpretation and information synthesis, including statistical methods, modeling, and visualization; and linkage of data to site management objectives and leveraging information to forge consensus among stakeholders. One investment sector was developed in this Breakout Session.
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.; Betts, W.
2015-12-01
The STAR online computing environment is an intensive, ever-growing system used for real-time data collection and analysis. Composed of heterogeneous, sometimes custom-tuned groups of machines, the computing infrastructure was previously managed by manual configurations and inconsistently monitored by a combination of tools. This situation led to configuration inconsistency and an overload of repetitive tasks, along with lackluster communication between personnel and machines. Globally securing this heterogeneous cyberinfrastructure was tedious at best, and an agile, policy-driven system ensuring consistency was pursued. Three configuration management tools, Chef, Puppet, and CFEngine, were compared in reliability, versatility, and performance, along with a comparison of the infrastructure monitoring tools Nagios and Icinga. STAR selected the CFEngine configuration management tool and the Icinga infrastructure monitoring system, leading to a versatile and sustainable solution. By leveraging these two tools, STAR can now swiftly upgrade and modify the environment to its needs with ease, as well as promptly react to cyber-security requests. By creating a sustainable long-term monitoring solution, the time to detect failures was reduced from days to minutes, allowing rapid action before issues become dire problems that could cause loss of precious experimental data or uptime.
Scanner baseliner monitoring and control in high volume manufacturing
NASA Astrophysics Data System (ADS)
Samudrala, Pavan; Chung, Woong Jae; Aung, Nyan; Subramany, Lokesh; Gao, Haiyong; Gomez, Juan-Manuel
2016-03-01
We analyze the performance of different customized models on baseliner overlay data and demonstrate a reduction in overlay residuals of ~10%. Smart sampling sets were assessed and compared with full-wafer measurements. We found that grid performance can still be maintained with one-third of the total sampling points, while reducing metrology time by 60%. We also demonstrate the feasibility of achieving time-to-time matching using the scanner fleet manager, and thus identify tool drifts even when the tool monitoring controls are within spec limits. We also explore variation of the scanner feedback constants with illumination source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.
2012-07-30
The project description is: (1) build a high-performance computer; and (2) create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be easily added to the tool. CBTF stands for Component Based Tool Framework. It is scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to perform a task on many nodes. Its components are reusable and easily added to a new tool. There are three levels of CBTF: (1) the frontend node, which interacts with users; (2) filter nodes, which filter or concatenate information from backend nodes; and (3) backend nodes, where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Service; it is a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It is dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts, and more. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; the backend nodes send information to the filter nodes; and the filter nodes concatenate the information and send it to a database on the frontend node. Ltool is useful for monitoring nodes on a cluster because the overhead involved in running it is not particularly high and it automatically scales to any size cluster.
Methodologies and Tools for Tuning Parallel Programs: 80% Art, 20% Science, and 10% Luck
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Bailey, David (Technical Monitor)
1996-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessors. However, without effective means to monitor (and analyze) program execution, tuning the performance of parallel programs becomes exponentially difficult as program complexity and machine size increase. In the past few years, the ubiquitous introduction of performance tuning tools from various supercomputer vendors (Intel's ParAide, TMC's PRISM, CRI's Apprentice, and Convex's CXtrace) seems to indicate the maturity of performance instrumentation/monitor/tuning technologies and vendors'/customers' recognition of their importance. However, a few important questions remain: What kind of performance bottlenecks can these tools detect (or correct)? How time consuming is the performance tuning process? What are some important technical issues that remain to be tackled in this area? This workshop reviews the fundamental concepts involved in analyzing and improving the performance of parallel and heterogeneous message-passing programs. Several alternative strategies will be contrasted, and for each we will describe how currently available tuning tools (e.g. AIMS, ParAide, PRISM, Apprentice, CXtrace, ATExpert, Pablo, IPS-2) can be used to facilitate the process. We will characterize the effectiveness of the tools and methodologies based on actual user experiences at NASA Ames Research Center. Finally, we will discuss their limitations and outline recent approaches taken by vendors and the research community to address them.
Instruction Guide and Macro Analysis Tool for Community-led Air Monitoring
EPA has developed two tools for evaluating the performance of low-cost sensors and interpreting the data they collect to help citizen scientists, communities, and professionals interested in learning about local air quality.
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
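The two chart statistics named above can be sketched in a few lines. The parameter choices here are illustrative, and the paper's charts additionally adjust for patient case mix and surgeon experience, which this sketch omits:

```python
def ewma(times, lam=0.2):
    """Exponentially weighted moving average of a series of operative times.

    lam weights the newest observation; smaller values smooth more.
    """
    out, s = [], times[0]
    for t in times:
        s = lam * t + (1.0 - lam) * s
        out.append(s)
    return out


def cusum(times, target, slack=0.0):
    """One-sided cumulative sum of deviations above a target operative time.

    The statistic resets toward zero when performance is on target and
    accumulates when times run long; an alarm fires when it crosses a limit.
    """
    out, c = [], 0.0
    for t in times:
        c = max(0.0, c + (t - target - slack))
        out.append(c)
    return out
```

Plotting either series per surgeon against control limits is the standard way such charts flag a sustained shift in operative time.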
The Generic Spacecraft Analyst Assistant (GenSAA): a Tool for Developing Graphical Expert Systems
NASA Technical Reports Server (NTRS)
Hughes, Peter M.
1993-01-01
During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real-time data. The analysts must watch for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As the satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At NASA GSFC, fault-isolation expert systems are in operation supporting this data monitoring task. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will readily support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merchant, Bion J
2015-12-22
NetMOD is a tool to model the performance of global ground-based explosion monitoring systems. Version 2.0 of the software supports the simulation of seismic, hydroacoustic, and infrasonic detection capability. The tool provides a user interface to execute simulations based upon a hypothetical definition of the monitoring system configuration, geophysical properties of the Earth, and detection analysis criteria. NetMOD will be distributed with a project file defining the basic performance characteristics of the International Monitoring System (IMS), a network of sensors operated by the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Network modeling is needed to assess and explain the potential effect of changes to the IMS, to prioritize station deployment and repair, and to assess the overall CTBTO monitoring capability now and in the future. Currently the CTBTO uses version 1.0 of NetMOD, provided to them in early 2014. NetMOD will provide a modern tool that covers all the simulations currently available and allows for the development of additional simulation capabilities for the IMS in the future. NetMOD simulates the performance of monitoring networks by estimating the relative amplitudes of the signal and noise measured at each of the stations within the network based upon known geophysical principles. From these signal and noise estimates, a probability of detection may be determined for each of the stations. The detection probabilities at each of the stations may then be combined to produce an estimate of the detection probability for the entire monitoring network.
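The abstract does not give NetMOD's exact combination rule. A common convention in network detection modeling is to require detection by at least k stations (e.g. enough to form an event); assuming independent per-station detection probabilities, the combination can be brute-forced for small networks as a hedged sketch:

```python
from itertools import combinations


def network_detection(p_stations, k):
    """Probability that at least k of the stations detect an event.

    Assumes per-station detection probabilities are independent; sums
    over every subset of detecting stations of size >= k. Exponential in
    the station count, so suitable only as a small-network illustration.
    """
    n = len(p_stations)
    total = 0.0
    for m in range(k, n + 1):
        for hit in combinations(range(n), m):
            prob = 1.0
            for i in range(n):
                prob *= p_stations[i] if i in hit else 1.0 - p_stations[i]
            total += prob
    return total
```

For k = 1 this reduces to 1 minus the probability that every station misses, which is the simplest form of the network-level estimate the abstract describes.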
Use of Semi-Autonomous Tools for ISS Commanding and Monitoring
NASA Technical Reports Server (NTRS)
Brzezinski, Amy S.
2014-01-01
As the International Space Station (ISS) has moved into a utilization phase, operations have shifted to become more ground-based with fewer mission control personnel monitoring and commanding multiple ISS systems. This shift to fewer people monitoring more systems has prompted use of semi-autonomous console tools in the ISS Mission Control Center (MCC) to help flight controllers command and monitor the ISS. These console tools perform routine operational procedures while keeping the human operator "in the loop" to monitor and intervene when off-nominal events arise. Two such tools, the Pre-positioned Load (PPL) Loader and Automatic Operators Recorder Manager (AutoORM), are used by the ISS Communications RF Onboard Networks Utilization Specialist (CRONUS) flight control position. CRONUS is responsible for simultaneously commanding and monitoring the ISS Command & Data Handling (C&DH) and Communications and Tracking (C&T) systems. PPL Loader is used to uplink small pieces of frequently changed software data tables, called PPLs, to ISS computers to support different ISS operations. In order to uplink a PPL, a data load command must be built that contains multiple user-input fields. Next, a multiple step commanding and verification procedure must be performed to enable an onboard computer for software uplink, uplink the PPL, verify the PPL has incorporated correctly, and disable the computer for software uplink. PPL Loader provides different levels of automation in both building and uplinking these commands. In its manual mode, PPL Loader automatically builds the PPL data load commands but allows the flight controller to verify and save the commands for future uplink. In its auto mode, PPL Loader automatically builds the PPL data load commands for flight controller verification, but automatically performs the PPL uplink procedure by sending commands and performing verification checks while notifying CRONUS of procedure step completion. 
If an off-nominal condition occurs during procedure execution, PPL Loader notifies CRONUS through popup messages, allowing CRONUS to examine the situation and choose how PPL Loader should proceed with the procedure. The use of PPL Loader to perform frequent, routine PPL uplinks offloads CRONUS to better monitor two ISS systems. It also reduces procedure performance time and decreases the risk of command errors. AutoORM identifies ISS communication outage periods and builds commands to lock, playback, and unlock ISS Operations Recorder files. Operations Recorder files are circular-buffer files of continually recorded ISS telemetry data. Sections of these files can be locked from further writing, be played back to capture telemetry data that occurred during an ISS loss of signal (LOS) period, and then be unlocked for future recording use. Downlinked Operations Recorder files are used by mission support teams for data analysis, especially if failures occur during LOS. The commands to lock, playback, and unlock Operations Recorder files are encompassed in three different operational procedures and contain multiple user-input fields. AutoORM provides different levels of automation for building and uplinking the commands to lock, playback, and unlock Operations Recorder files. In its automatic mode, AutoORM automatically detects ISS LOS periods, then generates and uplinks the commands to lock, playback, and unlock Operations Recorder files when MCC regains signal with ISS. AutoORM also features semi-autonomous and manual modes which integrate CRONUS more into the command verification and uplink process. AutoORM's ability to automatically detect ISS LOS periods and build the necessary commands to preserve, playback, and release recorded telemetry data greatly offloads CRONUS to perform more high-level cognitive tasks, such as mission planning and anomaly troubleshooting.
Additionally, since Operations Recorder commands contain numerical time input fields which are tedious for a human to manually build, AutoORM's ability to automatically build commands reduces operational command errors. PPL Loader and AutoORM demonstrate principles of semi-autonomous operational tools that will benefit future space mission operations. Both tools employ different levels of automation to perform simple and routine procedures, thereby offloading human operators to perform higher-level cognitive tasks. Because both tools provide procedure execution status and highlight off-nominal indications, the flight controller is able to intervene during procedure execution if needed. Semi-autonomous tools and systems that can perform routine procedures, yet keep human operators informed of execution, will be essential in future long-duration missions where the onboard crew will be solely responsible for spacecraft monitoring and control.
Development and optimization of the Suna trap as a tool for mosquito monitoring and control
2014-01-01
Background Monitoring of malaria vector populations provides information about disease transmission risk, as well as measures of the effectiveness of vector control. The Suna trap is introduced and evaluated with regard to its potential as a new, standardized, odour-baited tool for mosquito monitoring and control. Methods Dual-choice experiments with female Anopheles gambiae sensu lato in a laboratory room and semi-field enclosure were used to compare catch rates of odour-baited Suna traps and MM-X traps. The relative performance of the Suna trap, CDC light trap and MM-X trap as monitoring tools was assessed inside a human-occupied experimental hut in a semi-field enclosure. Use of the Suna trap as a tool to prevent mosquito house entry was also evaluated in the semi-field enclosure. The optimal hanging height of Suna traps was determined by placing traps at heights ranging from 15 to 105 cm above ground outside houses in western Kenya. Results In the laboratory the mean proportion of An. gambiae s.l. caught in the Suna trap was 3.2 times greater than the MM-X trap (P < 0.001), but the traps performed equally in semi-field conditions (P = 0.615). As a monitoring tool, the Suna trap outperformed an unlit CDC light trap (P < 0.001), but trap performance was equal when the CDC light trap was illuminated (P = 0.127). Suspending a Suna trap outside an experimental hut reduced entry rates by 32.8% (P < 0.001). Under field conditions, suspending the trap at 30 cm above ground resulted in the greatest catch sizes (mean 25.8 An. gambiae s.l. per trap night). Conclusions The performance of the Suna trap equals that of the CDC light trap and MM-X trap when used to sample An. gambiae inside a human-occupied house under semi-field conditions. The trap is effective in sampling mosquitoes outside houses in the field, and the use of a synthetic blend of attractants negates the requirement of a human bait. Hanging a Suna trap outside a house can reduce An. gambiae house entry and its use as a novel tool for reducing malaria transmission risk will be evaluated in peri-domestic settings in sub-Saharan Africa. PMID:24998771
Colditz, Ian G.; Ferguson, Drewe M.; Collins, Teresa; Matthews, Lindsay; Hemsworth, Paul H.
2014-01-01
Simple Summary Benchmarking is a tool widely used in agricultural industries that harnesses the experience of farmers to generate knowledge of practices that lead to better on-farm productivity and performance. We propose, by analogy with production performance, a method for measuring the animal welfare performance of an enterprise and describe a tool for farmers to monitor and improve the animal welfare performance of their business. A general framework is outlined for assessing and monitoring risks to animal welfare based on measures of animals, the environment they are kept in and how they are managed. The tool would enable farmers to continually improve animal welfare. Abstract Schemes for the assessment of farm animal welfare and assurance of welfare standards have proliferated in recent years. An acknowledged shortcoming has been the lack of impact of these schemes on the welfare standards achieved on farm, due in part to sociological factors concerning their implementation. Here we propose the concept of welfare performance based on a broad set of performance attributes of an enterprise and describe a tool based on risk assessment and benchmarking methods for measuring and managing welfare performance. The tool, termed the Unified Field Index, is presented in a general form comprising three modules addressing animal, resource, and management factors. Domains within these modules accommodate the principal conceptual perspectives for welfare assessment: biological functioning; emotional states; and naturalness. Pan-enterprise analysis in any livestock sector could be used to benchmark welfare performance of individual enterprises and also provide statistics of welfare performance for the livestock sector. An advantage of this concept of welfare performance is its use of continuous scales of measurement rather than traditional pass/fail measures.
Through the feedback provided via benchmarking, the tool should help farmers better engage in on-going improvement of farm practices that affect animal welfare. PMID:26480317
Using GPS to evaluate productivity and performance of forest machine systems
Steven E. Taylor; Timothy P. McDonald; Matthew W. Veal; Ton E. Grift
2001-01-01
This paper reviews recent research and operational applications of using GPS as a tool to help monitor the locations, travel patterns, performance, and productivity of forest machines. The accuracy of dynamic GPS data collected on forest machines under different levels of forest canopy is reviewed first. Then, the paper focuses on the use of GPS for monitoring forest...
Median of patient results as a tool for assessment of analytical stability.
Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft; Sölétormos, György
2015-06-15
In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor the long-term analytical stability as a supplement to the internal control procedures is often needed. Patient data from daily internal control schemes was used for monthly appraisal of the analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. Seventy-five percent of the twenty analytes measured on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and with the desirable specifications for bias. Patient results applied in analytical quality performance control procedures are the most reliable sources of material as they represent the genuine substance of the measurements and therefore circumvent the problems associated with non-commutable materials in external assessment. Patient medians in the monthly monitoring of analytical stability in laboratory medicine are an inexpensive, simple and reliable tool to monitor the steadiness of the analytical practice. Copyright © 2015 Elsevier B.V. All rights reserved.
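The monitoring idea described above can be sketched in a few lines: compare each monthly median of patient results against a target value and flag months whose percentage deviation exceeds the allowable-bias specification. The target, bias limit, and data below are illustrative, not the study's values:

```python
from statistics import median

def flag_unstable_months(monthly_results, target, allowable_bias_pct):
    """Return {month: (median, pct_deviation, within_spec)}."""
    report = {}
    for month, results in monthly_results.items():
        m = median(results)
        deviation = 100.0 * (m - target) / target
        report[month] = (m, deviation, abs(deviation) <= allowable_bias_pct)
    return report

# Hypothetical monthly patient results for one analyte (e.g. mmol/L)
data = {"Jan": [4.1, 4.3, 4.2, 4.0], "Feb": [4.6, 4.8, 4.7, 4.9]}
for month, (m, dev, ok) in flag_unstable_months(
        data, target=4.2, allowable_bias_pct=3.0).items():
    print(month, round(m, 2), round(dev, 1), "stable" if ok else "deviation")
```

Using medians rather than means keeps the check robust against the outliers expected in unselected patient data.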
Transmission Bearing Damage Detection Using Decision Fusion Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Lewicki, David G.; Decker, Harry J.
2004-01-01
A diagnostic tool was developed for detecting fatigue damage to rolling element bearings in an OH-58 main rotor transmission. Two different monitoring technologies, oil debris analysis and vibration, were integrated using data fusion into a health monitoring system for detecting bearing surface fatigue pitting damage. This integrated system showed improved detection and decision-making capabilities as compared to using individual monitoring technologies. This diagnostic tool was evaluated by collecting vibration and oil debris data from tests performed in the NASA Glenn 500 hp Helicopter Transmission Test Stand. Data was collected during experiments performed in this test rig when two unanticipated bearing failures occurred. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spiral bevel gear duplex ball bearings and spiral bevel pinion triplex ball bearings in a main rotor transmission.
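The paper's actual fusion algorithm is not reproduced here; as a hedged illustration of the general idea, the sketch below fuses two normalized health indicators (an oil-debris mass feature and a vibration feature, both pre-scaled to [0, 1]) into a single damage confidence with a simple probabilistic-OR rule, then thresholds it. All names and thresholds are hypothetical:

```python
def fuse_damage_indicators(oil_debris_level, vibration_level, threshold=0.5):
    """Probabilistic-OR fusion: confidence is high if either source is
    high, and higher still when both sources agree on damage."""
    fused = 1.0 - (1.0 - oil_debris_level) * (1.0 - vibration_level)
    return fused, fused >= threshold

confidence, damaged = fuse_damage_indicators(0.4, 0.3)
print(round(confidence, 2), damaged)
```

Note the property the abstract reports: two moderate indications that would individually fall below the alarm threshold can exceed it once combined.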
Errors in patient specimen collection: application of statistical process control.
Dzik, Walter Sunny; Beckman, Neil; Selleng, Kathleen; Heddle, Nancy; Szczepiorkowski, Zbigniew; Wendel, Silvano; Murphy, Michael
2008-10-01
Errors in the collection and labeling of blood samples for pretransfusion testing increase the risk of transfusion-associated patient morbidity and mortality. Statistical process control (SPC) is a recognized method to monitor the performance of a critical process. An easy-to-use SPC method was tested to determine its feasibility as a tool for monitoring quality in transfusion medicine. SPC control charts were adapted to a spreadsheet presentation. Data tabulating the frequency of mislabeled and miscollected blood samples from 10 hospitals in five countries from 2004 to 2006 were used to demonstrate the method. Control charts were produced to monitor process stability. The participating hospitals found the SPC spreadsheet well suited to monitoring the performance of sample labeling and collection and applied SPC charts to suit their specific needs. One hospital monitored subcategories of sample error in detail. A large hospital monitored the number of wrong-blood-in-tube (WBIT) events. Four smaller-sized facilities, each following the same policy for sample collection, combined their data on WBIT samples into a single control chart. One hospital used the control chart to monitor the effect of an educational intervention. A simple SPC method is described that can monitor the process of sample collection and labeling in any hospital. SPC could be applied to other critical steps in the transfusion processes as a tool for biovigilance and could be used to develop regional or national performance standards for pretransfusion sample collection. A link is provided to download the spreadsheet for free.
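The kind of chart the article adapts to a spreadsheet can be sketched as a p-chart for the monthly proportion of mislabeled samples, with the standard 3-sigma control limits. The counts below are invented for illustration:

```python
from math import sqrt

def p_chart_limits(errors, totals):
    """Center line (pooled proportion) and per-period 3-sigma limits
    for an attributes p-chart with varying sample sizes."""
    p_bar = sum(errors) / sum(totals)
    limits = []
    for n in totals:
        sigma = sqrt(p_bar * (1.0 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma))
    return p_bar, limits

# Hypothetical monthly mislabeled-sample counts and sample totals
errors = [12, 9, 15, 30]
totals = [4000, 3800, 4100, 3900]
p_bar, limits = p_chart_limits(errors, totals)
for e, n, (lo, hi) in zip(errors, totals, limits):
    p = e / n
    print(f"{p:.4f}", "in control" if lo <= p <= hi else "out of control")
```

Because sample sizes differ month to month, the limits are recomputed per month, which is exactly what makes a spreadsheet implementation convenient.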
Performance Metrics for Monitoring Parallel Program Executions
NASA Technical Reports Server (NTRS)
Sarukkai, Sekkar R.; Gotwais, Jacob K.; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Existing tools for debugging performance of parallel programs either provide graphical representations of program execution or profiles of program executions. However, for performance debugging tools to be useful, such information has to be augmented with information that highlights the cause of poor program performance. Identifying the cause of poor performance requires not only determining the significance of various performance problems for the execution time of the program, but also considering the effect of interprocessor communication on individual source-level data structures. In this paper, we present a suite of normalized indices which provide a convenient mechanism for focusing on a region of code with poor performance and highlight the cause of the problem in terms of processors, procedures and data structure interactions. All the indices are generated from trace files augmented with data structure information. Further, we show, with the help of examples from the NAS benchmark suite, that the indices help in detecting potential causes of poor performance, based on augmented execution traces obtained by monitoring the program.
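The paper's specific indices are not reproduced here; as an illustrative sketch, the snippet below computes one plausible normalized index from trace records: the fraction of total execution time each procedure spends in interprocessor communication, which points at the code region most responsible for poor performance. The trace records are invented:

```python
# Hypothetical trace records: (procedure, compute_seconds, comm_seconds)
trace = [
    ("init",     1.0, 0.1),
    ("exchange", 0.5, 4.0),
    ("relax",    6.0, 0.4),
]

# Normalize each procedure's communication time by total execution time,
# so the indices are comparable across runs and across procedures.
total = sum(compute + comm for _, compute, comm in trace)
indices = {proc: comm / total for proc, _, comm in trace}
worst = max(indices, key=indices.get)
print(worst, round(indices[worst], 3))
```

Normalizing by total execution time (rather than per-procedure time) is what lets the index rank problems by their impact on the whole program.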
NASA Astrophysics Data System (ADS)
Wolk, S. J.; Petreshock, J. G.; Allen, P.; Bartholowmew, R. T.; Isobe, T.; Cresitello-Dittmar, M.; Dewey, D.
The NASA Great Observatory Chandra was launched July 23, 1999 aboard the space shuttle Columbia. The Chandra Science Center (CXC) runs a monitoring and trends analysis program to maximize the science return from this mission. At the time of the launch, the monitoring portion of this system was in place. The system is a collection of multiple threads and programming methodologies acting cohesively. Real-time data are passed to the CXC. Our real-time tool, ACORN (A Comprehensive object-ORiented Necessity), performs limit checking of performance related hardware. Chandra is in ground contact less than 3 hours a day, so the bulk of the monitoring must take place on data dumped by the spacecraft. To do this, we have written several tools which run off the CXC data system pipelines. MTA_MONITOR_STATIC limit-checks FITS files containing hardware data. MTA_EVENT_MON and MTA_GRAT_MON create quick-look data for the focal plane instruments and the transmission gratings. When instruments violate their operational limits, the responsible scientists are notified by email and problem tracking is initiated. Output from all these codes is distributed to CXC scientists via an HTML interface.
The Automated Instrumentation and Monitoring System (AIMS) reference manual
NASA Technical Reports Server (NTRS)
Yan, Jerry; Hontalas, Philip; Listgarten, Sherry
1993-01-01
Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit which reconstructs program execution from the trace file; and a trace post-processor which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g. Sun SPARC and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).
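AIMS inserts "active event recorders" into source code before compilation; the sketch below imitates that idea in Python with a decorator that logs entry and exit timestamps to an in-memory trace that a post-processor could later reconstruct. This is an analogy only, not AIMS's actual FORTRAN/C instrumentation:

```python
import functools
import time

TRACE = []  # (event_kind, function_name, perf_counter_timestamp)

def instrument(func):
    """Record enter/exit events around each call, like an inserted
    event recorder; the original return value passes through untouched."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        TRACE.append(("enter", func.__name__, time.perf_counter()))
        try:
            return func(*args, **kwargs)
        finally:
            TRACE.append(("exit", func.__name__, time.perf_counter()))
    return wrapper

@instrument
def compute(n):
    return sum(i * i for i in range(n))

compute(1000)
print([(kind, name) for kind, name, _ in TRACE])
```

Pairing each enter event with its exit event is enough to rebuild per-routine timing, which is the minimum a trace animation or overhead-compensation pass needs.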
Agelastos, Anthony; Allan, Benjamin; Brandt, Jim; ...
2016-05-18
A detailed understanding of HPC applications' resource needs, and of their complex interactions with each other and with HPC platform resources, is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
Performance Monitoring of Chilled-Water Distribution Systems Using HVAC-Cx
Ferretti, Natascha Milesi; Galler, Michael A.; Bushby, Steven T.
2017-01-01
In this research we develop, test, and demonstrate the newest extension of the software HVAC-Cx (NIST and CSTB 2014), an automated commissioning tool for detecting common mechanical faults and control errors in chilled-water distribution systems (loops). The commissioning process can improve occupant comfort, ensure the persistence of correct system operation, and reduce energy consumption. Automated tools support the process by decreasing the time and the skill level required to carry out necessary quality assurance measures, and as a result they enable more thorough testing of building heating, ventilating, and air-conditioning (HVAC) systems. This paper describes the algorithm, developed by the National Institute of Standards and Technology (NIST), to analyze chilled-water loops and presents the results of a passive monitoring investigation using field data obtained from BACnet® (ASHRAE 2016) controllers, together with field validation of the findings. The tool was successful in detecting faults in system operation in its first field implementation supporting the investigation phase through performance monitoring. Its findings led to a full energy retrocommissioning of the field site. PMID:29167584
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calyam, Prasad
2014-09-15
The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide "network awareness" to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.
The Empower project - a new way of assessing and monitoring test comparability and stability.
De Grande, Linde A C; Goossens, Kenneth; Van Uytfanghe, Katleen; Stöckl, Dietmar; Thienpont, Linda M
2015-07-01
Manufacturers and laboratories might benefit from using a modern integrated tool for quality management/assurance. The tool should not be confounded by commutability issues and should focus on the intrinsic analytical quality and comparability of assays as performed in routine laboratories. In addition, it should enable monitoring of long-term stability of performance, with the possibility of quasi real-time remedial action. Therefore, we developed the "Empower" project. The project comprises four pillars: (i) master comparisons with panels of frozen single-donation samples, (ii) monitoring of patient percentiles and (iii) internal quality control data, and (iv) conceptual and statistical education about analytical quality. In the pillars described here (i and ii), state-of-the-art as well as biologically derived specifications are used. In the 2014 master comparisons survey, 125 laboratories forming 8 peer groups participated. It showed not only good intrinsic analytical quality of assays but also assay biases/non-comparability. Although laboratory performance was mostly satisfactory, sometimes huge between-laboratory differences were observed. In patient percentile monitoring, currently, 100 laboratories participate with 182 devices. Particularly, laboratories with a high daily throughput and low patient population variation show a stable moving median in time with good between-instrument concordance. Shifts/drifts due to lot changes are sometimes revealed. There is evidence that outpatient medians mirror the calibration set-points shown in the master comparisons. The Empower project gives manufacturers and laboratories a realistic view on assay quality/comparability as well as stability of performance and/or the reasons for increased variation. Therefore, it is a modern tool for quality management/assurance toward improved patient care.
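The patient-percentile monitoring pillar can be sketched as a moving median of daily outpatient medians that raises an alarm when it drifts from a baseline by more than a set percentage (as might happen after a reagent lot change). The window size, limit, and data below are illustrative assumptions, not the project's actual parameters:

```python
from collections import deque
from statistics import median

def detect_shift(daily_medians, baseline, window=7, limit_pct=5.0):
    """Return the day indices on which the moving median of the last
    `window` daily medians deviates from baseline by more than limit_pct."""
    recent = deque(maxlen=window)
    alarms = []
    for day, value in enumerate(daily_medians):
        recent.append(value)
        moving = median(recent)
        if abs(moving - baseline) / baseline * 100.0 > limit_pct:
            alarms.append(day)
    return alarms

# Hypothetical daily outpatient medians; a lot change shifts the level up
series = [100, 101, 99, 100, 102, 108, 109, 110, 111, 112]
print(detect_shift(series, baseline=100.0))
```

The moving median deliberately lags single-day spikes, so the alarm fires only on a sustained shift, which matches the abstract's focus on lot-change drifts rather than daily noise.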
Force Sensor Based Tool Condition Monitoring Using a Heterogeneous Ensemble Learning Model
Wang, Guofeng; Yang, Yinwei; Li, Zhimeng
2014-01-01
Tool condition monitoring (TCM) plays an important role in improving machining efficiency and guaranteeing workpiece quality. In order to realize reliable recognition of the tool condition, a robust classifier needs to be constructed to depict the relationship between tool wear states and sensory information. However, because of the complexity of the machining process and the uncertainty of the tool wear evolution, it is hard for a single classifier to fit all the collected samples without sacrificing generalization ability. In this paper, heterogeneous ensemble learning is proposed to realize tool condition monitoring in which the support vector machine (SVM), hidden Markov model (HMM) and radial basis function (RBF) are selected as base classifiers and a stacking ensemble strategy is further used to reflect the relationship between the outputs of these base classifiers and tool wear states. Based on the heterogeneous ensemble learning classifier, an online monitoring system is constructed in which the harmonic features are extracted from force signals and a minimal redundancy and maximal relevance (mRMR) algorithm is utilized to select the most prominent features. To verify the effectiveness of the proposed method, a titanium alloy milling experiment was carried out and samples with different tool wear states were collected to build the proposed heterogeneous ensemble learning classifier. Moreover, the homogeneous ensemble learning model and majority voting strategy are also adopted to make a comparison. The analysis and comparison results show that the proposed heterogeneous ensemble learning classifier performs better in both classification accuracy and stability. PMID:25405514
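The mRMR step mentioned above can be sketched as a greedy selection loop; as a hedged stand-in for the mutual-information scores mRMR normally uses, this version scores relevance as absolute Pearson correlation with the wear label and redundancy as mean absolute correlation with already-selected features. Feature names and data are invented for illustration, not the paper's:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def mrmr_select(features, labels, k):
    """Greedy mRMR: repeatedly pick the feature maximizing
    relevance-to-label minus mean redundancy with chosen features."""
    chosen = []
    while len(chosen) < k:
        best, best_score = None, float("-inf")
        for name, values in features.items():
            if name in chosen:
                continue
            relevance = abs(pearson(values, labels))
            redundancy = (sum(abs(pearson(values, features[c]))
                              for c in chosen) / len(chosen)) if chosen else 0.0
            if relevance - redundancy > best_score:
                best, best_score = name, relevance - redundancy
        chosen.append(best)
    return chosen

# Hypothetical force-signal features; "force_copy" nearly duplicates
# "force_rms", so mRMR should skip it in favor of "harmonic_2".
feats = {
    "force_rms":  [1.0, 2.0, 3.0, 4.0, 5.0],
    "force_copy": [0.9, 2.1, 2.9, 4.1, 4.9],
    "harmonic_2": [3.0, 1.0, 4.0, 2.0, 5.0],
}
wear = [4.0, 3.0, 7.0, 6.0, 10.0]
print(mrmr_select(feats, wear, 2))
```

The redundancy penalty is what distinguishes mRMR from plain relevance ranking: the near-duplicate feature is individually informative but adds almost nothing once its twin is selected.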
The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)
1997-01-01
Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among and within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
Management Tools for Bus Maintenance: Current Practices and New Methods. Final Report.
ERIC Educational Resources Information Center
Foerster, James; And Others
Management of bus fleet maintenance requires systematic recordkeeping, management reporting, and work scheduling procedures. Tools for controlling and monitoring routine maintenance activities are in common use. These include defect and fluid consumption reports, work order systems, historical maintenance records, and performance and cost…
DOT National Transportation Integrated Search
2008-12-01
Users guide for a sketch planning tool for exploring policy alternatives. It is intended for an audience of transportation professionals responsible for planning, designing, funding, operating, enforcing, monitoring, and managing HOV and HOT lanes...
Handwriting Skills in Children with Spina Bifida: Assessment, Monitoring and Measurement.
ERIC Educational Resources Information Center
Hancock, Julie; Alston, Jean
1986-01-01
Case studies of three students with spina bifida (ages 8-11) illustrate an individualized six-week handwriting intervention program which stressed assessment, monitoring, and measurement of changes in writing performance. Appropriate changes in physical support (sitting position, writing surface, and choice of writing tool) are recommended. (JW)
ATLAS offline software performance monitoring and optimization
NASA Astrophysics Data System (ADS)
Chauhan, N.; Kabra, G.; Kittelmann, T.; Langenberg, R.; Mandrysch, R.; Salzburger, A.; Seuster, R.; Ritsch, E.; Stewart, G.; van Eldik, N.; Vitillo, R.; Atlas Collaboration
2014-06-01
In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine grained routine and instruction level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries a testbed has been setup allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte-Carlo jobs. This framework analyses the CPU and memory performance of algorithms and an overview of the results is presented on a web page.
These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying the most common operations, with the call parameters well understood, and allowing improvements to be quantified in detail.
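The abstract describes starting and stopping counters around each algorithm for each processed event. A minimal sketch of that accumulation pattern, with wall-clock time standing in for the hardware counters that PAPI would read (the real ATLAS instrumentation and algorithm names are not reproduced here):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Sketch of the per-algorithm start/stop pattern described above. The real
# instrumentation reads hardware counters through PAPI; wall-clock time stands
# in for a counter here so the structure is runnable anywhere.
totals = defaultdict(float)   # accumulated "counter" value per algorithm
calls = defaultdict(int)      # number of measurements per algorithm

@contextmanager
def counting(algorithm_name):
    start = time.perf_counter()   # PAPI would start counters here
    try:
        yield
    finally:                      # ...and stop/accumulate them here
        totals[algorithm_name] += time.perf_counter() - start
        calls[algorithm_name] += 1

for event in range(100):          # one measurement per algorithm per event
    with counting("TrackFinder"): # hypothetical algorithm name
        sum(i * i for i in range(500))

print(calls["TrackFinder"])  # 100
```

Averaging `totals[name] / calls[name]` per algorithm gives the per-event cost figures that a daily monitoring framework can then track over time.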
NASA Technical Reports Server (NTRS)
Statler, Irving C. (Editor)
2007-01-01
The Aviation System Monitoring and Modeling (ASMM) Project was one of the projects within NASA's Aviation Safety Program from 1999 through 2005. The objective of the ASMM Project was to develop the technologies to enable the aviation industry to undertake a proactive approach to the management of its system-wide safety risks. The ASMM Project entailed four interdependent elements: (1) Data Analysis Tools Development - develop tools to convert numerical and textual data into information; (2) Intramural Monitoring - test and evaluate the data analysis tools in operational environments; (3) Extramural Monitoring - gain insight into the aviation system performance by surveying its front-line operators; and (4) Modeling and Simulations - provide reliable predictions of the system-wide hazards, their causal factors, and their operational risks that may result from the introduction of new technologies, new procedures, or new operational concepts. This report is a documentation of the history of this highly successful project and of its many accomplishments and contributions to improved safety of the aviation system.
Brown, Alexandra E; Okayasu, Hiromasa; Nzioki, Michael M; Wadood, Mufti Z; Chabot-Couture, Guillaume; Quddus, Arshad; Walker, George; Sutter, Roland W
2014-11-01
Monitoring the quality of supplementary immunization activities (SIAs) is a key tool for polio eradication. Regular monitoring data, however, are often unreliable, showing high coverage levels in virtually all areas, including those with ongoing virus circulation. To address this challenge, lot quality assurance sampling (LQAS) was introduced in 2009 as an additional tool to monitor SIA quality. Now used in 8 countries, LQAS provides a number of programmatic benefits: identifying areas of weak coverage quality with statistical reliability, differentiating areas of varying coverage with greater precision, and allowing for trend analysis of campaign quality. LQAS also accommodates changes to survey format, interpretation thresholds, evaluations of sample size, and data collection through mobile phones to improve timeliness of reporting and allow for visualization of campaign quality. LQAS becomes increasingly important to address remaining gaps in SIA quality and help focus resources on high-risk areas to prevent the continued transmission of wild poliovirus. © Crown copyright 2014.
Developing a tool to preserve eye contact with patients undergoing colonoscopy for pain monitoring
Niv, Yaron; Tal, Yossi
2012-01-01
Colonoscopy has become the leading procedure for early detection and prevention of colorectal cancer. Patients’ experience of colonic endoscopic procedures is scarcely reported, even though it is considered a major factor in colorectal cancer screening participation. Pain due to air inflation or stretching the colon with an endoscope is not rare during examination and may be the main obstacle to cooperation and participation in a screening program. We propose a four-stage study for developing a tool dedicated to pain monitoring during colonoscopy, as follows: (1) comparison of patient, nurse, and endoscopist questionnaire responses about patient pain and technical details of the procedure using the PAINAD tool during colonoscopy; (2) observation of the correlation between patients’ facial expressions and other parameters (using the short PAINAD); (3) development of a device for continuous monitoring of the patient’s facial expression during the procedure; (4) assessment of the usability of such a tool and its contribution to the outcomes of colonoscopy procedures. Early intervention by the staff performing the procedure, in reaction to alerts encoded by this tool, may prevent adverse events during the procedure. PMID:22977314
NASA Astrophysics Data System (ADS)
Johnson, Nicholas E.; Bonczak, Bartosz; Kontokosta, Constantine E.
2018-07-01
The increased availability and improved quality of new sensing technologies have catalyzed a growing body of research to evaluate and leverage these tools in order to quantify and describe urban environments. Air quality, in particular, has received greater attention because of the well-established links to serious respiratory illnesses and the unprecedented levels of air pollution in developed and developing countries and cities around the world. Though numerous laboratory and field evaluation studies have begun to explore the use and potential of low-cost air quality monitoring devices, the performance and stability of these tools has not been adequately evaluated in complex urban environments, and further research is needed. In this study, we present the design of a low-cost air quality monitoring platform based on the Shinyei PPD42 aerosol monitor and examine the suitability of the sensor for deployment in a dense heterogeneous urban environment. We assess the sensor's performance during a field calibration campaign from February 7th to March 25th 2017 with a reference instrument in New York City, and present a novel calibration approach using a machine learning method that incorporates publicly available meteorological data in order to improve overall sensor performance. We find that while the PPD42 performs well in relation to the reference instrument using linear regression (R2 = 0.36-0.51), a gradient boosting regression tree model can significantly improve device calibration (R2 = 0.68-0.76). We discuss the sensor's performance and reliability when deployed in a dense, heterogeneous urban environment during a period of significant variation in weather conditions, and important considerations when using machine learning techniques to improve the performance of low-cost air quality monitors.
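The calibration approach described above fits a gradient boosting regression tree model that maps raw sensor readings plus meteorological covariates to a reference measurement, outperforming a linear fit. A hedged sketch with synthetic data (the paper's actual features, humidity-bias form, and model settings are not reproduced here; the nonlinear bias below is assumed purely for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Synthetic stand-in for a field calibration campaign: a low-cost sensor with
# an assumed humidity-dependent bias, calibrated against a reference instrument.
rng = np.random.default_rng(0)
n = 2000
raw = rng.uniform(0, 50, n)       # raw low-cost sensor reading
rh = rng.uniform(20, 95, n)       # relative humidity (%), publicly available
temp = rng.uniform(-5, 25, n)     # temperature (C)
# "True" reference value with a humidity-dependent multiplicative sensor bias
ref = raw * (1 + 0.02 * (rh - 50)) + 0.1 * temp + rng.normal(0, 1.5, n)

X = np.column_stack([raw, rh, temp])
X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], ref[:1500], ref[1500:]

lin = LinearRegression().fit(X_tr, y_tr)                      # baseline
gb = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)  # GBRT calibration

r2_lin = r2_score(y_te, lin.predict(X_te))
r2_gb = r2_score(y_te, gb.predict(X_te))
```

Because the assumed bias is an interaction between reading and humidity, the tree ensemble captures it while the linear fit cannot, mirroring the R2 improvement the study reports.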
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Schumann, Johann
2004-01-01
High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as for monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on an F-15 aircraft are presented.
In-Line Monitoring of Fab Processing Using X-Ray Diffraction
NASA Astrophysics Data System (ADS)
Gittleman, Bruce; Kozaczek, Kris
2005-09-01
As the materials shift that started with Cu continues to advance in the semiconductor industry, new issues related to materials microstructure have arisen. While x-ray diffraction (XRD) has long been used in development applications, in this paper we show that results generated in real time by a unique, high throughput, fully automated XRD metrology tool can be used to develop metrics for qualification and monitoring of critical processes in current and future manufacturing. It will be shown that these metrics provide a unique set of data that correlate to manufacturing issues. For example, ionized-sputtering is the current deposition method of choice for both the Cu seed and TaNx/Ta barrier layers. The alpha phase of Ta is widely used in production for the upper layer of the barrier stack, but complete elimination of the beta phase requires a TaNx layer with sufficient N content, though not so much as to start poisoning the target and generating particle issues. This is a well documented issue, but traditional monitoring by sheet resistance methods cannot guarantee the absence of the beta phase, whereas XRD can determine the presence of even small amounts of beta. Nickel silicide for gate metallization is another example where monitoring of phase is critical. As well as being able to qualify an anneal process that gives only the desired NiSi phase everywhere across the wafer, XRD can be used to determine if full silicidation of the Ni has occurred and characterize the crystallographic microstructure of the Ni to determine any effect of that microstructure on the anneal process. The post-anneal nickel silicide phase and uniformity of the silicide microstructure can all be monitored in production. Other examples of the application of XRD to process qualification and production monitoring are derived from the dependence of certain processes, some types of defect generation, and device performance on crystallographic texture.
The data presented will show that CMP dishing problems could be traced to texture of the barrier layer and mitigated by adjusting the barrier process. The density of pits developed during CMP of electrochemically deposited (ECD) Cu depends on the fraction of (111) oriented grains. It must be emphasized that the crystallographic texture is not only a key parameter for qualification of high yielding and reliable processes, but also serves as a critical parameter for monitoring tool health. The texture of Cu and W is sensitive not only to deviations in performance of the tool depositing or annealing a particular film, but also highly sensitive to the texture of the barrier underlayers and thus any performance deviations in those tools. The XRD metrology tool has been designed with production monitoring in mind and has been fully integrated into both 200 mm and 300 mm fabs. Rapid analysis is achieved by using a high intensity fixed x-ray source, coupled with a large area 2D detector. The output metrics from one point are generated while the tool is measuring a subsequent point, giving true on-the-fly analysis; no post-processing of data is necessary. Spatial resolution on the wafer surface ranging from 35 μm to 1 mm is available, making the tool suitable for monitoring of product wafers. Typical analysis times range from 10 seconds to 2 minutes per point, depending on the film thickness and spot size. Current metrics used for process qualification and production monitoring are phase, FWHM of the primary phase peaks (for mean grain size tracking), and crystallographic texture.
Durkin, Gregory J
2010-01-01
A wide variety of evaluation formats are available for new graduate nurses, but most of them are single-point evaluation tools that do not provide a clear picture of progress for orientee or educator. This article describes the development of a Web-based evaluation tool that combines learning taxonomies with the Synergy model into a rating scale based on independent performance. The evaluation tool and process provides open 24/7 access to evaluation documentation for members of the orientation team, demystifying the process and clarifying expectations. The implementation of the tool has proven to be transformative in the perceptions of evaluation and performance expectations of new graduates. This tool has been successful at monitoring progress, altering education, and opening dialogue about performance for over 125 new graduate nurses since inception.
Data Fusion Tool for Spiral Bevel Gear Condition Indicator Data
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Antolick, Lance J.; Branning, Jeremy S.; Thomas, Josiah
2014-01-01
Tests were performed on two spiral bevel gear sets in the NASA Glenn Spiral Bevel Gear Fatigue Test Rig to simulate the fielded failures of spiral bevel gears installed in a helicopter. Gear sets were tested until damage initiated and progressed on two or more gear or pinion teeth. During testing, gear health monitoring data was collected with two different health monitoring systems. Operational parameters were measured with a third data acquisition system. Tooth damage progression was documented with photographs taken at inspection intervals throughout the test. A software tool was developed for fusing the operational data and the vibration based gear condition indicator (CI) data collected from the two health monitoring systems. Results of this study illustrate the benefits of combining the data from all three systems to indicate progression of damage for spiral bevel gears. The tool also enabled evaluation of the effectiveness of each CI with respect to operational conditions and fault mode.
Zhang, Cunji; Yao, Xifan; Zhang, Jianming; Jin, Hong
2016-05-31
Tool breakage degrades the surface finish and dimensional accuracy of a machined part and can damage the workpiece or the machine, so Tool Condition Monitoring (TCM) is vital in the manufacturing industry. In this paper, an indirect TCM approach is introduced using a wireless triaxial accelerometer. Vibrations in three orthogonal directions (x, y and z) are acquired during milling operations, and the raw signals are de-noised by wavelet analysis. Features of the de-noised signals are extracted in the time, frequency and time-frequency domains, and the key features are selected based on Pearson's Correlation Coefficient (PCC). A Neuro-Fuzzy Network (NFN) is adopted to predict the tool wear and Remaining Useful Life (RUL). In comparison with a Back Propagation Neural Network (BPNN) and a Radial Basis Function Network (RBFN), the results show that the NFN has the best performance in the prediction of tool wear and RUL.
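The PCC-based feature selection step described above can be sketched in a few lines: rank candidate signal features by the absolute Pearson correlation of each with the measured tool wear, and keep the top-ranked ones. The feature names and values below are synthetic stand-ins for the time/frequency-domain statistics extracted from the de-noised accelerometer signals:

```python
import numpy as np

# Minimal sketch of PCC-based feature selection. Feature values are synthetic
# stand-ins, not data from the paper.
rng = np.random.default_rng(1)
wear = np.linspace(0.05, 0.30, 40)                      # flank wear (mm) over 40 cuts
features = {
    "rms_x": wear * 3.0 + rng.normal(0, 0.02, 40),      # tracks wear closely
    "kurtosis_y": wear ** 2 + rng.normal(0, 0.02, 40),  # weaker, nonlinear link
    "mean_z": rng.normal(0, 1, 40),                     # uninformative
}

def pcc(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    return np.corrcoef(a, b)[0, 1]

# Rank features by |PCC| with wear; the strongest predictors come first.
ranked = sorted(features, key=lambda k: abs(pcc(features[k], wear)), reverse=True)
print(ranked)
```

Only the top-ranked features would then be fed to the NFN, keeping the predictor compact and discarding inputs that carry no wear information.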
Hemodynamic monitoring in the critically ill.
Voga, G
1995-06-01
Monitoring of vital functions is one of the most important and essential tools in the management of critically ill patients in the ICU. Today it is possible to detect and analyze a great variety of physiological signals by various noninvasive and invasive techniques. An intensivist should be able to select and perform the most appropriate monitoring method for the individual patient, weighing the risk-benefit ratio of the particular monitoring technique against the need for immediate therapy, specific diagnosis, continuous monitoring, and evaluation of morphology. Despite the rapid development of noninvasive monitoring techniques, invasive hemodynamic monitoring is still one of the most basic ICU procedures. It enables monitoring of pressures, flow and saturation in the systemic and pulmonary circulation, estimation of cardiac performance, and judgment of the adequacy of the cardiocirculatory system. Carefully and correctly obtained information is the basis for proper hemodynamic assessment, which usually affects the therapeutic decisions.
Issues in implementing a knowledge-based ECG analyzer for personal mobile health monitoring.
Goh, K W; Kim, E; Lavanya, J; Kim, Y; Soh, C B
2006-01-01
Advances in sensor technology, personal mobile devices, and wireless broadband communications are enabling the development of an integrated personal mobile health monitoring system that can provide patients with a useful tool to assess their own health and manage their personal health information anytime and anywhere. Personal mobile devices, such as PDAs and mobile phones, are becoming more powerful integrated information management tools and play a major role in many people's lives. We focus on designing a health-monitoring system for people who suffer from cardiac arrhythmias. We have developed computer simulation models to evaluate the performance of appropriate electrocardiogram (ECG) analysis techniques that can be implemented on personal mobile devices. This paper describes an ECG analyzer to perform ECG beat and episode detection and classification. We have obtained promising preliminary results from our study. Also, we discuss several key considerations when implementing a mobile health monitoring solution. The mobile ECG analyzer would become a front-end patient health data acquisition module, which is connected to the Personal Health Information Management System (PHIMS) for data repository.
Analytical and Clinical Performance of Blood Glucose Monitors
Boren, Suzanne Austin; Clarke, William L.
2010-01-01
Background The objective of this study was to understand the level of performance of blood glucose monitors as assessed in the published literature. Methods Medline from January 2000 to October 2009 and reference lists of included articles were searched to identify eligible studies. Key information was abstracted from eligible studies: blood glucose meters tested, blood sample, meter operators, setting, sample of people (number, diabetes type, age, sex, and race), duration of diabetes, years using a glucose meter, insulin use, recommendations followed, performance evaluation measures, and specific factors affecting the accuracy evaluation of blood glucose monitors. Results Thirty-one articles were included in this review. Articles were categorized as review articles of blood glucose accuracy (6 articles), original studies that reported the performance of blood glucose meters in laboratory settings (14 articles) or clinical settings (9 articles), and simulation studies (2 articles). A variety of performance evaluation measures were used in the studies. The authors did not identify any studies that demonstrated a difference in clinical outcomes. Examples of analytical tools used in the description of accuracy (e.g., correlation coefficient, linear regression equations, and International Organization for Standardization standards) and how these traditional measures can complicate the achievement of target blood glucose levels for the patient were presented. The benefits of using error grid analysis to quantify the clinical accuracy of patient-determined blood glucose values were discussed. Conclusions When examining blood glucose monitor performance in the real world, it is important to consider if an improvement in analytical accuracy would lead to improved clinical outcomes for patients. There are several examples of how analytical tools used in the description of self-monitoring of blood glucose accuracy could be irrelevant to treatment decisions. PMID:20167171
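The review above contrasts analytical measures (correlation, regression) with error grid analysis, which scores meter readings by clinical consequence rather than numerical distance. As a hedged sketch of that idea, the commonly cited Zone A criterion of the Clarke error grid treats a reading as clinically accurate if it is within 20% of the reference value, or if both readings are below 70 mg/dL; the full analysis also assigns zones B-E, which are omitted here:

```python
# Minimal sketch of the clinical-accuracy idea behind error grid analysis,
# using only the commonly cited Zone A criterion of the Clarke error grid.
def in_zone_a(reference, meter):
    """True if the meter reading is clinically accurate (Zone A)."""
    if reference < 70 and meter < 70:
        return True                                # both hypoglycemic: same action
    return abs(meter - reference) <= 0.2 * reference  # within 20% of reference

# (reference, meter) pairs in mg/dL; values are illustrative only
readings = [(100, 115), (200, 150), (60, 55), (180, 172)]
accurate = [in_zone_a(r, m) for r, m in readings]
print(accurate)  # the (200, 150) pair misses Zone A: off by 25%
```

Note how the (60, 55) pair is a 8% error yet also clinically benign, while a same-magnitude absolute error around a treatment threshold could change a dosing decision, which is exactly the distinction correlation coefficients cannot capture.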
Cobbledick, Jeffrey; Nguyen, Alexander; Latulippe, David R
2014-07-01
The current challenges associated with the design and operation of net-energy positive wastewater treatment plants demand sophisticated approaches for the monitoring of polymer-induced flocculation. In anaerobic digestion (AD) processes, the dewaterability of the sludge is typically assessed from off-line lab-bench tests - the capillary suction time (CST) test is one of the most common. Focused beam reflectance measurement (FBRM) is a promising technique for real-time monitoring of critical performance attributes in large scale processes and is ideally suited for dewatering applications. The flocculation performance of twenty-four cationic polymers, that spanned a range of polymer size and charge properties, was measured using both the FBRM and CST tests. Analysis of the data revealed a decreasing monotonic trend; the samples that had the highest percent removal of particles less than 50 microns in size as determined by FBRM had the lowest CST values. A subset of the best performing polymers was used to evaluate the effects of dosage amount and digestate sources on dewatering performance. The results from this work show that FBRM is a powerful tool that can be used for optimization and on-line monitoring of dewatering processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Using SFOC to fly the Magellan Venus mapping mission
NASA Technical Reports Server (NTRS)
Bucher, Allen W.; Leonard, Robert E., Jr.; Short, Owen G.
1993-01-01
Traditionally, spacecraft flight operations at the Jet Propulsion Laboratory (JPL) have been performed by teams of spacecraft experts utilizing ground software designed specifically for the current mission. The Jet Propulsion Laboratory set out to reduce the cost of spacecraft mission operations by designing ground data processing software that could be used by multiple spacecraft missions, either sequentially or concurrently. The Space Flight Operations Center (SFOC) System was developed to provide the ground data system capabilities needed to monitor several spacecraft simultaneously and provide enough flexibility to meet the specific needs of individual projects. The Magellan Spacecraft Team utilizes the SFOC hardware and software designed for engineering telemetry analysis, both real-time and non-real-time. The flexibility of the SFOC System has allowed the spacecraft team to integrate their own tools with SFOC tools to perform the tasks required to operate a spacecraft mission. This paper describes how the Magellan Spacecraft Team is utilizing the SFOC System in conjunction with their own software tools to perform the required tasks of spacecraft event monitoring as well as engineering data analysis and trending.
Development and Validation of the Student Tool for Technology Literacy (ST[superscript 2]L)
ERIC Educational Resources Information Center
Hohlfeld, Tina N.; Ritzhaupt, Albert D.; Barron, Ann E.
2010-01-01
This article provides an overview of the development and validation of the Student Tool for Technology Literacy (ST[superscript 2]L). Developing valid and reliable objective performance measures for monitoring technology literacy is important to all organizations charged with equipping students with the technology skills needed to successfully…
ConfocalCheck - A Software Tool for the Automated Monitoring of Confocal Microscope Performance
Hng, Keng Imm; Dormann, Dirk
2013-01-01
Laser scanning confocal microscopy has become an invaluable tool in biomedical research but regular quality testing is vital to maintain the system’s performance for diagnostic and research purposes. Although many methods have been devised over the years to characterise specific aspects of a confocal microscope like measuring the optical point spread function or the field illumination, only very few analysis tools are available. Our aim was to develop a comprehensive quality assurance framework ranging from image acquisition to automated analysis and documentation. We created standardised test data to assess the performance of the lasers, the objective lenses and other key components required for optimum confocal operation. The ConfocalCheck software presented here analyses the data fully automatically. It creates numerous visual outputs indicating potential issues requiring further investigation. By storing results in a web browser compatible file format the software greatly simplifies record keeping allowing the operator to quickly compare old and new data and to spot developing trends. We demonstrate that the systematic monitoring of confocal performance is essential in a core facility environment and how the quantitative measurements obtained can be used for the detailed characterisation of system components as well as for comparisons across multiple instruments. PMID:24224017
User-level framework for performance monitoring of HPC applications
NASA Astrophysics Data System (ADS)
Hristova, R.; Goranov, G.
2013-10-01
HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance monitoring of High-Performance Computing (HPC) applications in the infrastructure can be useful to the end user as a diagnostic of the overall performance of his applications. The existing monitoring tools for HP-SEE provide the end user only with aggregated information across all applications; usually, the user does not have permission to select only the information relevant to his own applications. In this article we present a framework for performance monitoring of the HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use to monitor his applications. Furthermore, a program interface has been developed as part of the framework. The interface allows the user to publish metrics data from his application and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure; therefore, each user is authorized to access only the data for his own applications.
Gaussian process regression for tool wear prediction
NASA Astrophysics Data System (ADS)
Kong, Dongdong; Chen, Yongjie; Li, Ning
2018-05-01
To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. GPR also performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, the presence of noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, greatly compressing and smoothing the confidence interval, which is conducive to accurate tool wear monitoring. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests are conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately with the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
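The GPR step described above predicts a wear value together with a confidence interval derived from the predictive standard deviation. A hedged sketch with synthetic data (the fused feature, kernel choice, and wear curve below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative GPR wear model: predict flank wear width from one fused feature
# and report a ~95% confidence interval from the predictive standard deviation.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)[:, None]                     # fused feature (e.g. a KPCA score)
y = 0.3 * x.ravel() + 0.05 + rng.normal(0, 0.01, 30)   # flank wear width (mm), assumed trend

# WhiteKernel models the measurement noise explicitly, as the abstract notes
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)

mean, std = gpr.predict(np.array([[0.5]]), return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std    # confidence interval (error bar)
```

The width of `upper - lower` is exactly the interval-stability quantity the paper targets: noisier inputs inflate `std`, which is why the de-noising feature fusion step compresses the interval.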
Monitoring and Reporting Tools of the International Data Centre and International Monitoring System
NASA Astrophysics Data System (ADS)
Lastowka, L.; Anichenko, A.; Galindo, M.; Villagran Herrera, M.; Mori, S.; Malakhova, M.; Daly, T.; Otsuka, R.; Stangel, H.
2007-05-01
The Comprehensive Test-Ban Treaty (CTBT), which prohibits all nuclear explosions, was opened for signature in 1996. Since then, the Preparatory Commission for the CTBT Organization has been working towards the establishment of a global verification regime to monitor compliance with the ban on nuclear testing. The International Monitoring System (IMS) comprises facilities for seismic, hydroacoustic, infrasound and radionuclide monitoring, and the means of communication. This system is supported by the International Data Centre (IDC), which provides objective products and services necessary for effective global monitoring. Upon completion of the IMS, 321 stations will be contributing to both near real-time and reviewed data products. Currently there are 194 facilities in IDC operations. This number is expected to increase by about 40% over the next few years, necessitating methods and tools to effectively handle the expansion. The requirements of high data availability as well as operational transparency are fundamental principles of IMS network operations; therefore, a suite of tools for monitoring and reporting has been developed. These include applications for monitoring Global Communication Infrastructure (GCI) links, detecting outages in continuous and segmented data, monitoring the status of data processing and forwarding to member states, and for systematic electronic communication and problem ticketing. The operation of the IMS network requires the help of local specialists whose cooperation is in some cases ensured by contracts or other agreements. The PTS (Provisional Technical Secretariat) strives to make the monitoring of the IMS as standardized and efficient as possible, and has therefore created the Operations Centre in which the use of most of the tools is centralized. Recently the tasks of operations across all technologies, including the GCI, have been centralized within a single section of the organization.
To harmonize the operations, an ongoing State of Health monitoring project will provide an integrated view of network, station and GCI performance and will provide system metrics. Comprehensive procedures will be developed to utilize this tool. However, as the IMS network expands, easier access to more information will cause additional challenges, mainly with human resources, to analyze and manage these metrics.
Data Intensive Computing on Amazon Web Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magana-Zook, S. A.
The Geophysical Monitoring Program (GMP) has spent the past few years building up the capability to perform data-intensive computing using what have been referred to as “big data” tools. These big data tools would be used against massive archives of seismic signals (>300 TB) to conduct research not previously possible. Examples of such tools include Hadoop (HDFS, MapReduce), HBase, Hive, Storm, Spark, Solr, and many more by the day. These tools are useful for performing data analytics on datasets that exceed the resources of traditional analytic approaches. To this end, a research big data cluster (“Cluster A”) was set up as a collaboration between GMP and Livermore Computing (LC).
In situ monitoring of cocrystals in formulation development using low-frequency Raman spectroscopy.
Otaki, Takashi; Tanabe, Yuta; Kojima, Takashi; Miura, Masaru; Ikeda, Yukihiro; Koide, Tatsuo; Fukami, Toshiro
2018-05-05
In recent years, with the adoption of quality-by-design approaches to the development of pharmaceutical products, it has become important to identify the properties of raw materials and excipients in order to determine critical process parameters and critical quality attributes. Feedback obtained from real-time analyses using various process analytical technology (PAT) tools has been actively investigated. In this study, in situ monitoring using low-frequency (LF) Raman spectroscopy (10–200 cm⁻¹), which may have higher discriminative ability among polymorphs than near-infrared spectroscopy and conventional Raman spectroscopy (200–1800 cm⁻¹), was investigated as a possible PAT application, because LF-Raman spectroscopy probes intermolecular and/or lattice vibrations in the solid state. The monitoring results obtained from a furosemide/nicotinamide cocrystal indicate that LF-Raman spectroscopy is applicable to in situ monitoring of suspension and fluidized bed granulation processes, and is an effective PAT technique for detecting the conversion risk of cocrystals. LF-Raman spectroscopy can also be used as a PAT tool to monitor reactions, crystallizations, and manufacturing processes of drug substances and products. In addition, a sequence of conversion behaviors of furosemide/nicotinamide cocrystals was determined by in situ monitoring for the first time. Copyright © 2018 Elsevier B.V. All rights reserved.
Monitoring tools of COMPASS experiment at CERN
NASA Astrophysics Data System (ADS)
Bodlak, M.; Frolov, V.; Huber, S.; Jary, V.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Tomsa, J.; Virius, M.
2015-12-01
This paper briefly introduces the data acquisition system of the COMPASS experiment, focusing on the part responsible for monitoring the nodes of the newly developed data acquisition system. COMPASS is a high-energy particle physics experiment with a fixed target, located at the SPS at CERN in Geneva, Switzerland. The hardware of the data acquisition system has been upgraded to use FPGA cards that are responsible for data multiplexing and event building. The software counterpart of the system includes several processes deployed in a heterogeneous network environment. Two processes, namely the Message Logger and the Message Browser, take care of monitoring. These tools handle messages generated by nodes in the system. While the Message Logger collects and saves messages to the database, the Message Browser serves as a graphical interface over the database containing these messages. For better performance, certain database optimizations have been applied. Lastly, results of performance tests are presented.
Initiating an Online Reputation Monitoring System with Open Source Analytics Tools
NASA Astrophysics Data System (ADS)
Shuhud, Mohd Ilias M.; Alwi, Najwa Hayaati Md; Halim, Azni Haslizan Abd
2018-05-01
Online reputation is an invaluable asset for modern organizations, as it can improve business performance, especially sales and profit. However, a reputation is difficult to maintain if we are not aware of it. Social media analytics can provide online reputation monitoring in various ways, such as sentiment analysis, and numerous large-scale organizations have consequently implemented Online Reputation Monitoring (ORM) systems. This solution should not be exclusive to high-income organizations, however, as many organizations regardless of size and type are now online. This research proposes an affordable and reliable ORM system built from a combination of open source analytics tools, aimed at both novice practitioners and academicians. We also evaluate its prediction accuracy and find that the system provides acceptable predictions (sixty percent accuracy) and that its majority-polarity predictions tally with human annotation. The proposed system can help support business decisions with flexible monitoring strategies, especially for organizations that want to initiate and administer ORM themselves at low cost.
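The accuracy evaluation described above can be sketched in a few lines: given the system's sentiment predictions and human annotations for a set of posts, compute the agreement rate and check whether the majority polarity matches. The labels below are hypothetical toy data, not the study's dataset.

```python
# A minimal sketch of the accuracy check described in the abstract,
# using hypothetical sentiment labels (not the study's data).
from collections import Counter

def accuracy(predicted, annotated):
    """Fraction of items where the system agrees with the human annotator."""
    matches = sum(p == a for p, a in zip(predicted, annotated))
    return matches / len(annotated)

def majority_polarity(labels):
    """Most frequent polarity in a label sequence."""
    return Counter(labels).most_common(1)[0][0]

predicted = ["pos", "neg", "pos", "pos", "neg", "pos", "neg", "pos", "pos", "neg"]
annotated = ["pos", "neg", "neg", "pos", "pos", "pos", "neg", "neg", "pos", "pos"]

print(accuracy(predicted, annotated))  # prints 0.6 on this toy sample
print(majority_polarity(predicted) == majority_polarity(annotated))  # True
```

On this toy sample the system is right six times out of ten, and both the predicted and annotated label sets have "pos" as their majority polarity, mirroring the two checks reported in the abstract.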
Lee, Jong Woo; LaRoche, Suzette; Choi, Hyunmi; Rodriguez Ruiz, Andres A; Fertig, Evan; Politsky, Jeffrey M; Herman, Susan T; Loddenkemper, Tobias; Sansevere, Arnold J; Korb, Pearce J; Abend, Nicholas S; Goldstein, Joshua L; Sinha, Saurabh R; Dombrowski, Keith E; Ritzl, Eva K; Westover, Michael B; Gavvala, Jay R; Gerard, Elizabeth E; Schmitt, Sarah E; Szaflarski, Jerzy P; Ding, Kan; Haas, Kevin F; Buchsbaum, Richard; Hirsch, Lawrence J; Wusthoff, Courtney J; Hopp, Jennifer L; Hahn, Cecil D
2016-04-01
The rapid expansion of the use of continuous critical care electroencephalogram (cEEG) monitoring and resulting multicenter research studies through the Critical Care EEG Monitoring Research Consortium has created the need for a collaborative data sharing mechanism and repository. The authors describe the development of a research database incorporating the American Clinical Neurophysiology Society standardized terminology for critical care EEG monitoring. The database includes flexible report generation tools that allow for daily clinical use. Key clinical and research variables were incorporated into a Microsoft Access database. To assess its utility for multicenter research data collection, the authors performed a 21-center feasibility study in which each center entered data from 12 consecutive intensive care unit monitoring patients. To assess its utility as a clinical report generating tool, three large volume centers used it to generate daily clinical critical care EEG reports. A total of 280 subjects were enrolled in the multicenter feasibility study. The duration of recording (median, 25.5 hours) varied significantly between the centers. The incidence of seizure (17.6%), periodic/rhythmic discharges (35.7%), and interictal epileptiform discharges (11.8%) was similar to previous studies. The database was used as a clinical reporting tool by 3 centers that entered a total of 3,144 unique patients covering 6,665 recording days. The Critical Care EEG Monitoring Research Consortium database has been successfully developed and implemented with a dual role as a collaborative research platform and a clinical reporting tool. It is now available for public download to be used as a clinical data repository and report generating tool.
NASA Astrophysics Data System (ADS)
Andrade, P.; Fiorini, B.; Murphy, S.; Pigueiras, L.; Santos, M.
2015-12-01
Over the past two years, the operation of the CERN Data Centres went through significant changes with the introduction of new mechanisms for hardware procurement, new services for cloud provisioning and configuration management, among other improvements. These changes resulted in an increase of resources being operated in a more dynamic environment. Today, the CERN Data Centres provide over 11000 multi-core processor servers, 130 PB disk servers, 100 PB tape robots, and 150 high performance tape drives. To cope with these developments, an evolution of the data centre monitoring tools was also required. This modernisation was based on a number of guiding rules: sustain the increase of resources, adapt to the new dynamic nature of the data centres, make monitoring data easier to share, give more flexibility to Service Managers on how they publish and consume monitoring metrics and logs, establish a common repository of monitoring data, optimise the handling of monitoring notifications, and replace the previous toolset by new open source technologies with large adoption and community support. This contribution describes how these improvements were delivered, presents the architecture and technologies of the new monitoring tools, and reviews the experience of their production deployment.
Learning to Write: Progress-Monitoring Tools for Beginning and at-Risk Writers
ERIC Educational Resources Information Center
Ritchey, Kristen D.
2006-01-01
Teachers now have a wide range of tools to help assess the beginning reading performance of kindergarten and first-grade children. However, validated procedures for assessing the beginning writing skills of kindergarten and first-grade children are less widely available. Learning to write, like learning to read, is a complex task. The ability to…
7 CFR 275.19 - Monitoring and evaluation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING SYSTEM Corrective Action § 275.19... data available through program management tools and other sources. (c) In instances where the State...
7 CFR 275.19 - Monitoring and evaluation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING SYSTEM Corrective Action § 275.19... data available through program management tools and other sources. (c) In instances where the State...
7 CFR 275.19 - Monitoring and evaluation.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING SYSTEM Corrective Action § 275.19... data available through program management tools and other sources. (c) In instances where the State...
7 CFR 275.19 - Monitoring and evaluation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING SYSTEM Corrective Action § 275.19... data available through program management tools and other sources. (c) In instances where the State...
Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks.
Al Hajj, Hassan; Lamard, Mathieu; Conze, Pierre-Henri; Cochener, Béatrice; Quellec, Gwenolé
2018-05-09
This paper investigates the automatic monitoring of tool usage during a surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. Novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently from the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should be trained to extract the most useful visual features in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed on a dataset of 50 cataract surgery videos, where the usage of 21 surgical tools was manually annotated, and on a dataset of 80 cholecystectomy videos, where the usage of 7 tools was manually annotated.
Very good classification performance is achieved on both datasets: tool usage could be labeled with an average area under the ROC curve of Az = 0.9961 and Az = 0.9939, respectively, in offline mode (using past, present and future information), and Az = 0.9957 and Az = 0.9936, respectively, in online mode (using past and present information only). Copyright © 2018 Elsevier B.V. All rights reserved.
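The Az values reported above are areas under the ROC curve. As a minimal sketch (with hypothetical per-frame scores, not the paper's data), Az can be computed directly from its rank interpretation: the probability that a randomly chosen positive frame receives a higher score than a randomly chosen negative one, with ties counted as half.

```python
# Area under the ROC curve (Az) via the Mann-Whitney rank formulation.
# Scores and labels below are hypothetical, for illustration only.

def roc_auc(scores, labels):
    """Probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.10]  # hypothetical classifier outputs
labels = [1,    1,    0,    1,    0,    0]      # 1 = tool present in frame
print(roc_auc(scores, labels))  # 8/9 on this toy sample
```

For large frame counts a sorting-based O(n log n) formulation is preferable, but the pairwise version above makes the definition of Az explicit.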
Performance evaluation of the Engineering Analysis and Data Systems (EADS) 2
NASA Technical Reports Server (NTRS)
Debrunner, Linda S.
1994-01-01
The Engineering Analysis and Data System II (EADS II) (1) was installed in March 1993 to provide high performance computing for science and engineering at Marshall Space Flight Center (MSFC). EADS II increased the computing capabilities over the existing EADS facility in the areas of throughput and mass storage. EADS II includes a Vector Processor Compute System (VPCS), a Virtual Memory Compute System, a Common File System (CFS), and a Common Output System (COS), as well as Image Processing Stations, mini supercomputers, and Intelligent Workstations. These facilities are interconnected by a sophisticated network system. This work considers only the performance of the VPCS and the CFS. The VPCS is a Cray YMP. The CFS is implemented on an RS 6000 using the UniTree Mass Storage System. To better meet the science and engineering computing requirements, EADS II must be monitored, its performance analyzed, and appropriate modifications for performance improvement made. Implementing this approach requires tools to assist in performance monitoring and analysis. In Spring 1994, PerfStat 2.0 was purchased to meet these needs for the VPCS and the CFS. PerfStat (2) is a set of tools that can be used to analyze both historical and real-time performance data. Its flexible design allows significant user customization. The user identifies what data is collected, how it is classified, and how it is displayed for evaluation. Both graphical and tabular displays are supported. The capability of the PerfStat tool was evaluated, appropriate modifications to EADS II to optimize throughput and enhance productivity were suggested and implemented, and the effects of these modifications on system performance were observed. In this paper, the PerfStat tool is described, and its use with EADS II is outlined briefly. Next, the evaluation of the VPCS, as well as the modifications made to the system, are described. Finally, conclusions are drawn and recommendations for future work are outlined.
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
NASA Astrophysics Data System (ADS)
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
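The network metrics named above (delay, jitter, loss) are typically derived from packet timestamps. A minimal sketch, using hypothetical transit times and the RTP interarrival-jitter estimator of RFC 3550 (J ← J + (|D| − J)/16), which is the statistic carried in the RTCP reports that the passive monitor analyzes:

```python
# RFC 3550 interarrival-jitter estimator plus a simple loss-rate metric.
# Transit times (arrival minus send timestamp) below are hypothetical.

def update_jitter(jitter, prev_transit, transit):
    """One step of the RFC 3550 running jitter estimate (1/16 gain)."""
    d = abs(transit - prev_transit)  # |D(i-1, i)|
    return jitter + (d - jitter) / 16.0

def loss_rate(expected, received):
    """Fraction of expected packets that never arrived."""
    return (expected - received) / expected

# transit times in milliseconds for five consecutive packets (hypothetical)
transits = [40.0, 42.0, 41.0, 45.0, 44.0]
jitter = 0.0
for prev, cur in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, cur)

print(round(jitter, 4))                      # running jitter after 4 updates
print(loss_rate(expected=100, received=97))  # 0.03, i.e. 3% packet loss
```

The 1/16 gain makes the estimator a noise-damped moving average, so a single late packet nudges the jitter figure rather than dominating it.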
Glucose Biosensors: An Overview of Use in Clinical Practice
Yoo, Eun-Hyung; Lee, Soo-Youn
2010-01-01
Blood glucose monitoring has been established as a valuable tool in the management of diabetes. Since maintaining normal blood glucose levels is recommended, a series of suitable glucose biosensors have been developed. During the last 50 years, glucose biosensor technology, including point-of-care devices, continuous glucose monitoring systems and noninvasive glucose monitoring systems, has been significantly improved. However, there continue to be several challenges related to the achievement of accurate and reliable glucose monitoring. Further technical improvements in glucose biosensors, standardization of the analytical goals for their performance, and continuous assessment and training of lay users are required. This article reviews the brief history, basic principles, analytical performance, and present status of glucose biosensors in clinical practice. PMID:22399892
Resilient Monitoring Systems: Architecture, Design, and Application to Boiler/Turbine Plant
Garcia, Humberto E.; Lin, Wen-Chiao; Meerkov, Semyon M.; ...
2014-11-01
Resilient monitoring systems, considered in this paper, are sensor networks that degrade gracefully under malicious attacks on their sensors, causing them to project misleading information. The goal of this work is to design, analyze, and evaluate the performance of a resilient monitoring system intended to monitor plant conditions (normal or anomalous). The architecture developed consists of four layers: data quality assessment, process variable assessment, plant condition assessment, and sensor network adaptation. Each of these layers is analyzed by either analytical or numerical tools. The performance of the overall system is evaluated using a simplified boiler/turbine plant. The measure of resiliency is quantified using the Kullback-Leibler divergence, and is shown to be sufficiently high in all scenarios considered.
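The Kullback-Leibler divergence used above as the resiliency measure compares the probability distribution the monitoring system assigns to plant conditions against a reference distribution. A minimal sketch with hypothetical two-state (normal/anomalous) distributions, not the paper's actual numbers:

```python
# Kullback-Leibler divergence D(P||Q) over discrete plant-condition
# distributions. The two distributions below are hypothetical.
import math

def kl_divergence(p, q):
    """D(P||Q) = sum_i p_i * log(p_i / q_i), in nats; 0*log(0/q) := 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

truth    = [0.9, 0.1]  # P: plant is normal with probability 0.9
assessed = [0.6, 0.4]  # Q: degraded assessment under sensor attack

print(round(kl_divergence(truth, assessed), 4))
```

The divergence is zero only when the two distributions coincide and grows as the assessment drifts from the truth, which is what makes it usable as a scalar resiliency score.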
Resilient monitoring systems: architecture, design, and application to boiler/turbine plant.
Garcia, Humberto E; Lin, Wen-Chiao; Meerkov, Semyon M; Ravichandran, Maruthi T
2014-11-01
Resilient monitoring systems, considered in this paper, are sensor networks that degrade gracefully under malicious attacks on their sensors, causing them to project misleading information. The goal of this paper is to design, analyze, and evaluate the performance of a resilient monitoring system intended to monitor plant conditions (normal or anomalous). The architecture developed consists of four layers: data quality assessment, process variable assessment, plant condition assessment, and sensor network adaptation. Each of these layers is analyzed by either analytical or numerical tools. The performance of the overall system is evaluated using a simplified boiler/turbine plant. The measure of resiliency is quantified based on the Kullback-Leibler divergence and shown to be sufficiently high in all scenarios considered.
Place, Jérôme; Robert, Antoine; Ben Brahim, Najib; Keith-Hynes, Patrick; Farret, Anne; Pelletier, Marie-Josée; Buckingham, Bruce; Breton, Marc; Kovatchev, Boris; Renard, Eric
2013-01-01
Background: Developments in an artificial pancreas (AP) for patients with type 1 diabetes have allowed a move toward performing outpatient clinical trials. A “home-like” environment implies specific protocol and system adaptations, among which the introduction of remote monitoring is meaningful. We present a novel tool allowing remote monitoring of AP use by multiple patients in home-like settings. Methods: We investigated existing systems, performed interviews of experienced clinical teams, listed required features, and drew several mockups of the user interface. The resulting application was tested on the bench before it was used in three outpatient studies representing 3480 h of remote monitoring. Results: Our tool, called DiAs Web Monitoring (DWM), is a web-based application that ensures reception, storage, and display of data sent by AP systems. Continuous glucose monitoring (CGM) and insulin delivery data are presented in a colored chart to facilitate reading and interpretation. Several subjects can be monitored simultaneously on the same screen, and alerts are triggered to help detect events such as hypoglycemia or CGM failures. In the third trial, DWM received approximately 460 data items per subject per hour: 77% were log messages and 5% CGM data. More than 97% of transmissions were achieved in less than 5 min. Conclusions: Transition from a hospital setting to home-like conditions requires specific AP supervision, to which remote monitoring systems can contribute valuably. DiAs Web Monitoring worked properly when tested in our outpatient studies. It could facilitate subject monitoring and even accelerate medical and technical assessment of the AP. It should now be adapted for long-term studies with an enhanced notification feature. J Diabetes Sci Technol 2013;7(6):1427–1435 PMID:24351169
Place, Jérôme; Robert, Antoine; Ben Brahim, Najib; Keith-Hynes, Patrick; Farret, Anne; Pelletier, Marie-Josée; Buckingham, Bruce; Breton, Marc; Kovatchev, Boris; Renard, Eric
2013-11-01
Developments in an artificial pancreas (AP) for patients with type 1 diabetes have allowed a move toward performing outpatient clinical trials. A "home-like" environment implies specific protocol and system adaptations, among which the introduction of remote monitoring is meaningful. We present a novel tool allowing remote monitoring of AP use by multiple patients in home-like settings. We investigated existing systems, performed interviews of experienced clinical teams, listed required features, and drew several mockups of the user interface. The resulting application was tested on the bench before it was used in three outpatient studies representing 3480 h of remote monitoring. Our tool, called DiAs Web Monitoring (DWM), is a web-based application that ensures reception, storage, and display of data sent by AP systems. Continuous glucose monitoring (CGM) and insulin delivery data are presented in a colored chart to facilitate reading and interpretation. Several subjects can be monitored simultaneously on the same screen, and alerts are triggered to help detect events such as hypoglycemia or CGM failures. In the third trial, DWM received approximately 460 data items per subject per hour: 77% were log messages and 5% CGM data. More than 97% of transmissions were achieved in less than 5 min. Transition from a hospital setting to home-like conditions requires specific AP supervision, to which remote monitoring systems can contribute valuably. DiAs Web Monitoring worked properly when tested in our outpatient studies. It could facilitate subject monitoring and even accelerate medical and technical assessment of the AP. It should now be adapted for long-term studies with an enhanced notification feature. © 2013 Diabetes Technology Society.
A proactive system for maritime environment monitoring.
Moroni, Davide; Pieri, Gabriele; Tampucci, Marco; Salvetti, Ovidio
2016-01-30
The ability to remotely detect and monitor oil spills is becoming increasingly important due to the high demand of oil-based products. Indeed, shipping routes are becoming very crowded and the likelihood of oil slick occurrence is increasing. In this frame, a fully integrated remote sensing system can be a valuable monitoring tool. We propose an integrated and interoperable system able to monitor ship traffic and marine operators, using sensing capabilities from a variety of electronic sensors, along with geo-positioning tools, and through a communication infrastructure. Our system is capable of transferring heterogeneous data, freely and seamlessly, between different elements of the information system (and their users) in a consistent and usable form. The system also integrates a collection of decision support services providing proactive functionalities. Such services demonstrate the potentiality of the system in facilitating dynamic links among different data, models and actors, as indicated by the performed field tests. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hehmke, Bernd; Berg, Sabine; Salzsieder, Eckhard
2017-05-01
Continuous standardized verification of the accuracy of blood glucose meter systems for self-monitoring after their introduction into the market is an important clinical tool to assure reliable performance of subsequently released lots of strips. Moreover, such published verification studies permit comparison of different blood glucose monitoring systems and are thus increasingly involved in the process of evidence-based purchase decision making.
A graphic system for telemetry monitoring and procedure performing at the Telecom SCC
NASA Technical Reports Server (NTRS)
Loubeyre, Jean Philippe
1994-01-01
The increasing number of telemetry parameters and the increasing complexity of procedures used for in-orbit satellite follow-up have led to the development of new tools for telemetry monitoring and procedure performing. The system presented here is named the Graphic Server. It provides an advanced graphic representation of the satellite subsystems, including real-time telemetry and alarm display, and powerful decision-making support with online contingency procedures. Used for 2.5 years at the TELECOM S.C.C. for procedure performing, it has become an essential part of the S.C.C.
A low-cost sensing system for cooperative air quality monitoring in urban areas.
Brienza, Simone; Galli, Andrea; Anastasi, Giuseppe; Bruschi, Paolo
2015-05-26
Air quality in urban areas is a very important topic, as it closely affects the health of citizens. Recent studies highlight that exposure to polluted air can increase the incidence of diseases and deteriorate the quality of life. Hence, it is necessary to develop tools for real-time air quality monitoring, so as to allow appropriate and timely decisions. In this paper, we present uSense, a low-cost cooperative monitoring tool that provides real-time knowledge of the concentrations of polluting gases in various areas of the city. Specifically, users monitor the areas of their interest by deploying low-cost and low-power sensor nodes, and they can share the collected data following a social networking approach. uSense has been tested through an in-field experimentation performed in different areas of a city. The obtained results are in line with those provided by the local environmental control authority and show that uSense can be profitably used for air quality monitoring.
Object-oriented Approach to High-level Network Monitoring and Management
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2000-01-01
An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. We are investigating methods to build high-level monitoring systems that are built on top of existing monitoring tools. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design: first, we use UML (Unified Modeling Language) to model users' requirements; second, we identify the existing capabilities of the underlying monitoring system; third, we try to map the former onto the latter. The system is built using the APIs offered by the HP OpenView system.
NASA Astrophysics Data System (ADS)
Cordova, Martin; Serio, Andrew; Meza, Francisco; Arriagada, Gustavo; Swett, Hector; Ball, Jesse; Collins, Paul; Masuda, Neal; Fuentes, Javier
2016-07-01
In 2014 Gemini Observatory started the base facility operations (BFO) project. The project's goal was to provide the ability to operate the two Gemini telescopes from their base facilities (respectively Hilo, HI at Gemini North, and La Serena, Chile at Gemini South). BFO was identified as a key project for Gemini's transition program, as it created an opportunity to reduce operational costs. In November 2015, the Gemini North telescope started operating from the base facility in Hilo, Hawaii. In order to provide the remote operator the tools to work from the base, many of the activities that were normally performed by the night staff at the summit were replaced with new systems and tools. This paper describes some of the key systems and tools implemented for environmental monitoring, and the design used in the implementation at the Gemini North telescope.
Mossavar-Rahmani, Yasmin; Henry, Holly; Rodabough, Rebecca; Bragg, Charlotte; Brewer, Amy; Freed, Trish; Kinzel, Laura; Pedersen, Margaret; Soule, C Oehme; Vosburg, Shirley
2004-01-01
Self-monitoring promotes behavior change by increasing awareness of eating habits and building self-efficacy. It is an important component of the Women's Health Initiative dietary intervention. During the first year of intervention, 74% of the total sample of 19,542 dietary intervention participants self-monitored. As the study progressed, the self-monitoring rate declined, to 59% by spring 2000. Participants were challenged by the inability to accurately estimate the fat content of restaurant foods and by the inconvenience of carrying bulky self-monitoring tools. In 1996, a Self-Monitoring Working Group was organized to develop additional self-monitoring options that were responsive to participant needs. This article describes the original and additional self-monitoring tools and trends in tool use over time. The original tools were the Food Diary and Fat Scan. Additional tools include the Keeping Track of Goals, Quick Scan, Picture Tracker, and Eating Pattern Changes instruments. The additional tools were used by the majority of self-monitoring participants (5,353 of 10,260, or 52%) by spring 2000. Developing self-monitoring tools that are responsive to participant needs increases the likelihood that self-monitoring can enhance dietary reporting adherence, especially in long-term clinical trials.
The evolution of CMS software performance studies
NASA Astrophysics Data System (ADS)
Kortelainen, M. J.; Elmer, P.; Eulisse, G.; Innocente, V.; Jones, C. D.; Tuura, L.
2011-12-01
CMS has had an ongoing and dedicated effort to optimize software performance for several years. Initially this effort focused primarily on the cleanup of many issues coming from basic C++ errors: reducing dynamic memory churn, eliminating unnecessary copies/temporaries, and building tools to routinely monitor these issues. Over the past 1.5 years, however, the transition to 64-bit, newer versions of the gcc compiler, newer tools and the enabling of techniques like vectorization have made possible more sophisticated improvements to the software performance. This presentation will cover this evolution and describe the current avenues being pursued for software performance, as well as the corresponding gains.
ERIC Educational Resources Information Center
de Bruin, Anique B. H.; Kok, Ellen M.; Lobbestael, Jill; de Grip, Andries
2017-01-01
Being overconfident when estimating scores for an upcoming exam is a widespread phenomenon in higher education and presents threats to self-regulated learning and academic performance. The present study sought to investigate how overconfidence and poor monitoring accuracy vary over the length of a college course, and how an intervention consisting…
Satisfaction monitoring for quality control in campground management
Wilbur F. LaPage; Malcolm I. Bevins
1981-01-01
A 4-year study of camper satisfaction indicates that satisfaction monitoring is a useful tool for campground managers to assess their performance and achieve a high level of quality control in their service to the public. An indication of camper satisfaction with campground management is gained from a report card on which a small sample of visitors rates 14 elements of...
Simulation of car movement along circular path
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Tikhov-Tinnikov, D. A.; Ovchinnikova, N. I.; Lysenko, A. V.
2017-10-01
Under operating conditions, suspension system performance changes, which negatively affects vehicle stability and handling. The paper aims to simulate the impact of changes in suspension system performance on vehicle stability and handling. Methods. The paper describes monitoring of suspension system performance and testing of vehicle stability and handling, and analyzes methods of monitoring suspension system performance under operating conditions. A mathematical model of car movement along a circular path on a horizontal road was developed, and turning car movements were simulated. Calculation and experiment results were compared. The simulation proves the applicability of the mathematical model for assessing the impact of suspension system performance on vehicle stability and handling.
Wang, Lu; Zeng, Shanshan; Chen, Teng; Qu, Haibin
2014-03-01
A promising process analytical technology (PAT) tool has been introduced for batch process monitoring. Direct analysis in real time mass spectrometry (DART-MS), a means of rapid fingerprint analysis, was applied to a percolation process with multi-constituent substances for an anti-cancer botanical preparation. Fifteen batches were carried out, including ten normal operations and five abnormal batches with artificial variations. The obtained multivariate data were analyzed by a multi-way partial least squares (MPLS) model. Control trajectories were derived from eight normal batches, and the model quality was tested by R² and Q². Accuracy and diagnosis capability of the batch model were then validated by the remaining batches. Assisted by high performance liquid chromatography (HPLC) determination, process faults were explained by the corresponding variable contributions. Furthermore, a batch-level model was developed to compare and assess the model performance. The present study has demonstrated that DART-MS is very promising for process monitoring in botanical manufacturing. Compared with general PAT tools, DART-MS offers particular insight into the effective constituents and can potentially be used to improve batch quality and process consistency of samples in complex matrices. Copyright © 2014 Elsevier B.V. All rights reserved.
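The monitoring logic of such batch models can be sketched in a few lines: project each batch onto a low-dimensional model fit from normal batches, and flag batches whose squared prediction error exceeds a control limit. The sketch below uses PCA as a simple stand-in for MPLS, with invented data and an ad hoc control limit; no DART-MS specifics are implied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated unfolded batch data: 8 normal training batches,
# each batch flattened to (time points x variables) = 20 features.
normal = rng.normal(0.0, 1.0, size=(8, 20))

# Fit a 2-component PCA model on the normal batches (mean-centred).
mu = normal.mean(axis=0)
Xc = normal - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T                      # loadings (20 x 2)

def spe(x):
    """Squared prediction error (Q statistic) of one batch."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)

# Control limit: mean + 3 std of training SPE (a crude stand-in
# for the usual chi-squared approximation).
train_spe = np.array([spe(b) for b in normal])
limit = train_spe.mean() + 3 * train_spe.std()

faulty = normal[0] + 5.0          # artificial process deviation
print(spe(faulty) > limit)        # the deviating batch is flagged
```

In the paper's setting the "variable contributions" to the residual r would then point at which process variables caused the fault.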
Jeagle: a JAVA Runtime Verification Tool
NASA Technical Reports Server (NTRS)
D'Amorim, Marcelo; Havelund, Klaus
2005-01-01
We introduce the temporal logic Jeagle and its supporting tool for runtime verification of Java programs. A monitor for a Jeagle formula checks if a finite trace of program events satisfies the formula. Jeagle is a programming-oriented extension of the powerful rule-based Eagle logic, which has been shown to be capable of defining and implementing a range of finite-trace monitoring logics, including future- and past-time temporal logic, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace. Jeagle extends Eagle with constructs for capturing parameterized program events such as method calls and method returns. Parameters can be the objects that methods are called upon, arguments to methods, and return values. Jeagle allows one to refer to these in formulas. The tool performs automated program instrumentation using AspectJ. We show the transformational semantics of Jeagle.
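The state-by-state monitoring idea is easy to illustrate. The sketch below is a toy Python monitor for one hand-written property over call/return events; it is not the Eagle/Jeagle logic itself, and all names are invented.

```python
# A minimal state-by-state trace monitor in the spirit of finite-trace
# runtime verification (illustration only; not the Eagle/Jeagle logic).
# Property checked: every method `return` must match a still-open `call`,
# consumed one event at a time so the full trace is never stored.

def make_monitor():
    open_calls = []                     # stack of currently open calls

    def step(event, method):
        """Consume one event; return False on a property violation."""
        if event == "call":
            open_calls.append(method)
            return True
        if event == "return":
            if open_calls and open_calls[-1] == method:
                open_calls.pop()
                return True
            return False                # return without matching call
        return True                     # other events are ignored

    return step

monitor = make_monitor()
trace = [("call", "f"), ("call", "g"), ("return", "g"), ("return", "f")]
print(all(monitor(e, m) for e, m in trace))   # True: trace satisfies property
```

Jeagle's AspectJ instrumentation plays the role of feeding such a monitor with real call/return events, parameterized by receiver, arguments, and return value.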
McCarthy, K.
2006-01-01
Semipermeable membrane devices (SPMDs) were deployed at eight sites within the Buffalo Slough, near Portland, Oregon, to (1) measure the spatial and seasonal distribution of dissolved polycyclic aromatic hydrocarbon (PAH) and organochlorine (OC) compounds in the slough, (2) assess the usefulness of SPMDs as a tool for investigating and monitoring hydrophobic compounds throughout the Columbia Slough system, and (3) evaluate the utility of SPMDs as a tool for measuring the long-term effects of watershed improvement activities. Data from the SPMDs revealed clear spatial and seasonal differences in water quality within the slough and indicate that for hydrophobic compounds, this time-integrated passive-sampling technique is a useful tool for long-term watershed monitoring. In addition, the data suggest that a spiking rate of 2-5 µg/SPMD of permeability/performance reference compounds, including at least one compound that is not susceptible to photodegradation, may be optimum for the conditions encountered here. © Springer Science + Business Media, Inc. 2006.
Leung, Alexander A; Keohane, Carol; Lipsitz, Stuart; Zimlichman, Eyal; Amato, Mary; Simon, Steven R; Coffey, Michael; Kaufman, Nathan; Cadet, Bismarck; Schiff, Gordon; Seger, Diane L; Bates, David W
2013-06-01
The Leapfrog CPOE evaluation tool has been promoted as a means of monitoring computerized physician order entry (CPOE). We sought to determine the relationship between Leapfrog scores and the rates of preventable adverse drug events (ADE) and potential ADE. A cross-sectional study of 1000 adult admissions in five community hospitals from October 1, 2008 to September 30, 2010 was performed. Observed rates of preventable ADE and potential ADE were compared with scores reported by the Leapfrog CPOE evaluation tool. The primary outcome was the rate of preventable ADE and the secondary outcome was the composite rate of preventable ADE and potential ADE. Leapfrog performance scores were highly related to the primary outcome. A 43% relative reduction in the rate of preventable ADE was predicted for every 5% increase in Leapfrog scores (rate ratio 0.57; 95% CI 0.37 to 0.88). In absolute terms, four fewer preventable ADE per 100 admissions were predicted for every 5% increase in overall Leapfrog scores (rate difference -4.2; 95% CI -7.4 to -1.1). A statistically significant relationship between Leapfrog scores and the secondary outcome, however, was not detected. Our findings support the use of the Leapfrog tool as a means of evaluating and monitoring CPOE performance after implementation, as addressed by current certification standards. Scores from the Leapfrog CPOE evaluation tool closely relate to actual rates of preventable ADE. Leapfrog testing may alert providers to potential vulnerabilities and highlight areas for further improvement.
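The reported effect size follows from simple rate-ratio arithmetic. The numbers below come from the abstract; the compounding step for a 10% increase is an extrapolation added for illustration only.

```python
# Arithmetic behind the reported effect size: a rate ratio of 0.57
# per 5% increase in Leapfrog score (value from the abstract).
rate_ratio_per_5pct = 0.57

# Relative reduction for one 5% step.
print(round((1 - rate_ratio_per_5pct) * 100))   # 43 (% reduction)

# Rate ratios compound multiplicatively, so a 10% score increase
# (two 5% steps) would correspond to a larger predicted reduction.
rr_10pct = rate_ratio_per_5pct ** 2
print(round((1 - rr_10pct) * 100))              # 68 (% reduction)
```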
NASA Technical Reports Server (NTRS)
2014-01-01
Topics covered include: Innovative Software Tools Measure Behavioral Alertness; Miniaturized, Portable Sensors Monitor Metabolic Health; Patient Simulators Train Emergency Caregivers; Solar Refrigerators Store Life-Saving Vaccines; Monitors Enable Medication Management in Patients' Homes; Handheld Diagnostic Device Delivers Quick Medical Readings; Experiments Result in Safer, Spin-Resistant Aircraft; Interfaces Visualize Data for Airline Safety, Efficiency; Data Mining Tools Make Flights Safer, More Efficient; NASA Standards Inform Comfortable Car Seats; Heat Shield Paves the Way for Commercial Space; Air Systems Provide Life Support to Miners; Coatings Preserve Metal, Stone, Tile, and Concrete; Robots Spur Software That Lends a Hand; Cloud-Based Data Sharing Connects Emergency Managers; Catalytic Converters Maintain Air Quality in Mines; NASA-Enhanced Water Bottles Filter Water on the Go; Brainwave Monitoring Software Improves Distracted Minds; Thermal Materials Protect Priceless, Personal Keepsakes; Home Air Purifiers Eradicate Harmful Pathogens; Thermal Materials Drive Professional Apparel Line; Radiant Barriers Save Energy in Buildings; Open Source Initiative Powers Real-Time Data Streams; Shuttle Engine Designs Revolutionize Solar Power; Procedure-Authoring Tool Improves Safety on Oil Rigs; Satellite Data Aid Monitoring of Nation's Forests; Mars Technologies Spawn Durable Wind Turbines; Programs Visualize Earth and Space for Interactive Education; Processor Units Reduce Satellite Construction Costs; Software Accelerates Computing Time for Complex Math; Simulation Tools Prevent Signal Interference on Spacecraft; Software Simplifies the Sharing of Numerical Models; Virtual Machine Language Controls Remote Devices; Micro-Accelerometers Monitor Equipment Health; Reactors Save Energy, Costs for Hydrogen Production; Cameras Monitor Spacecraft Integrity to Prevent Failures; Testing Devices Garner Data on Insulation Performance; Smart Sensors Gather Information for Machine Diagnostics; Oxygen Sensors Monitor Bioreactors and Ensure Health and Safety; Vision Algorithms Catch Defects in Screen Displays; and Deformable Mirrors Capture Exoplanet Data, Reflect Lasers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
Zhang, Cunji; Yao, Xifan; Zhang, Jianming; Jin, Hong
2016-01-01
Tool breakage degrades the surface finish and dimensional accuracy of the machined part, and can damage the workpiece or machine. Tool Condition Monitoring (TCM) is therefore vital in the manufacturing industry. In this paper, an indirect TCM approach is introduced with a wireless triaxial accelerometer. The vibrations in the three orthogonal directions (x, y and z) are acquired during milling operations, and the raw signals are de-noised by wavelet analysis. Features of the de-noised signals are then extracted in the time, frequency and time–frequency domains. The key features are selected based on Pearson’s Correlation Coefficient (PCC). The Neuro-Fuzzy Network (NFN) is adopted to predict the tool wear and Remaining Useful Life (RUL). In comparison with Back Propagation Neural Network (BPNN) and Radial Basis Function Network (RBFN), the results show that the NFN has the best performance in the prediction of tool wear and RUL. PMID:27258277
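The feature extraction and PCC-based selection steps can be sketched with synthetic data. The code below is an illustration only: the signals, wear values, the reduced feature set (three time-domain features), and the 0.8 threshold are all invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated vibration records for 30 milling passes: signal energy
# grows with a synthetic tool-wear value, plus noise.
wear = np.linspace(0.0, 0.3, 30)                      # flank wear, mm
signals = rng.normal(0.0, 1.0 + 3.0 * wear[:, None], size=(30, 512))

# Time-domain features per pass.
rms = np.sqrt((signals ** 2).mean(axis=1))
peak = np.abs(signals).max(axis=1)
crest = peak / rms                # crest factor, roughly wear-independent here

def pcc(x, y):
    """Pearson's correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

# Keep features strongly correlated with wear (|PCC| above a threshold).
features = {"rms": rms, "peak": peak, "crest": crest}
selected = [n for n, f in features.items() if abs(pcc(f, wear)) > 0.8]
print(sorted(selected))
```

In the paper the selected features feed the NFN for wear and RUL prediction; here the amplitude-driven features (like RMS) survive selection while the wear-independent crest factor is dropped.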
Policy to Performance Toolkit: Transitioning Adults to Opportunity
ERIC Educational Resources Information Center
Alamprese, Judith A.; Limardo, Chrys
2012-01-01
The "Policy to Performance Toolkit" is designed to provide state adult education staff and key stakeholders with guidance and tools to use in developing, implementing, and monitoring state policies and their associated practices that support an effective state adult basic education (ABE) to postsecondary education and training transition…
Continuous Glucose Monitoring and Trend Accuracy
Gottlieb, Rebecca; Le Compte, Aaron; Chase, J. Geoffrey
2014-01-01
Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage of CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy-to-interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements, and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data with corresponding rising SG data. This study developed a simple-to-use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot and, if required, the performance table and TI metric. PMID:24876437
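The dot-product idea can be illustrated with a toy calculation. This sketch normalizes the dot product of successive SG and BG differences into a cosine-style agreement score; the paper's exact Trend Compass definition is not reproduced here, and the glucose values are invented.

```python
# Hedged sketch of dot-product trend agreement between CGM sensor
# glucose (SG) and reference blood glucose (BG).

def trend_agreement(sg, bg):
    """Cosine of the angle between the SG and BG trend vectors:
    +1 = same direction, 0 = orthogonal, -1 = opposite trends."""
    d_sg = [b - a for a, b in zip(sg, sg[1:])]
    d_bg = [b - a for a, b in zip(bg, bg[1:])]
    dot = sum(x * y for x, y in zip(d_sg, d_bg))
    norm = (sum(x * x for x in d_sg) * sum(y * y for y in d_bg)) ** 0.5
    return dot / norm if norm else 0.0

bg = [7.2, 6.8, 6.1, 5.4, 4.9]       # falling reference BG (mmol/L)
sg_good = [7.4, 6.9, 6.3, 5.5, 5.0]  # sensor tracking the fall
sg_bad = [7.4, 7.6, 7.9, 8.3, 8.6]   # sensor rising while BG falls

print(trend_agreement(bg, sg_good) > 0.9)   # True
print(trend_agreement(bg, sg_bad) < 0)      # True: opposite trend flagged
```

The `sg_bad` case mirrors the abstract's example of low and falling BG with rising SG, which a point-accuracy metric alone could miss.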
Integrated health management and control of complex dynamical systems
NASA Astrophysics Data System (ADS)
Tolani, Devendra K.
2005-11-01
A comprehensive control and health management strategy for human-engineered complex dynamical systems is formulated for achieving high performance and reliability over a wide range of operation. Results from diverse research areas such as Probabilistic Robust Control (PRC), Damage Mitigating/Life Extending Control (DMC), Discrete Event Supervisory (DES) Control, Symbolic Time Series Analysis (STSA) and Health and Usage Monitoring System (HUMS) have been employed to achieve this goal. Continuous-domain control modules at the lower level are synthesized by PRC and DMC theories, whereas the upper-level supervision is based on DES control theory. In the PRC approach, by allowing different levels of risk under different flight conditions, the control system can achieve the desired trade-off between stability robustness and nominal performance. In the DMC approach, component damage is incorporated in the control law to reduce the damage rate for enhanced structural durability. The DES controller monitors the system performance and, based on the mission requirements (e.g., performance metrics and level of damage mitigation), switches among various lower-level controllers. The core idea is to design a framework in which the upper-level DES controller mimics human intelligence and makes appropriate decisions to satisfy mission requirements and enhance system performance and structural durability. Recently developed tools in STSA have been used for anomaly detection and failure prognosis. The DMC deals with the usage monitoring or operational control part of health management, whereas the issue of health monitoring is addressed by the anomaly detection tools. The proposed decision and control architecture has been validated on two test-beds, simulating the operations of rotorcraft dynamics and aircraft propulsion.
DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)
NASA Technical Reports Server (NTRS)
Keith, B.
1994-01-01
Typical network monitors measure the status of host computers and data traffic among hosts. A monitor that collects statistics about individual processes must be unobtrusive and possess the ability to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, the Distributed Application Monitor Tool, is a distributed application program that collects network statistics and makes them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Potential users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors. Application processes require no changes to be monitored by this tool. Neither does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database. This database contains all information available about currently executing processes. Expanding the information monitored by the tool can be done by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components: the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer.
All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating systems information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes. It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. 
DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
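The Switcher's role of centralizing the component-to-host routing table can be sketched in a few lines. This toy is purely illustrative: the component names follow the abstract, but the class and its methods are hypothetical, and the real DAMT components communicate over UNIX sockets rather than in-process calls.

```python
# Toy sketch of DAMT's Virtual Path Switcher idea: one component keeps
# the component->host routing table so the other processes do not have to.

class Switcher:
    def __init__(self):
        self.routes = {}                       # component name -> host

    def register(self, component, host):
        """Record which host a tool component is running on."""
        self.routes[component] = host

    def route(self, dest, payload):
        """Return the (host, payload) pair a message should be sent to."""
        if dest not in self.routes:
            raise KeyError(f"unknown component: {dest}")
        return self.routes[dest], payload

sw = Switcher()
sw.register("CentralMonitor", "hostA")
sw.register("RemoteMonitor", "hostB")
print(sw.route("RemoteMonitor", "traffic-report"))  # ('hostB', 'traffic-report')
```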
Evolution of a residue laboratory network and the management tools for monitoring its performance.
Lins, E S; Conceição, E S; Mauricio, A De Q
2012-01-01
Since 2005 the National Residue & Contaminants Control Plan (NRCCP) in Brazil has been considerably enhanced, increasing the number of samples, substances and species monitored, and also the analytical detection capability. The Brazilian laboratory network was forced to improve its quality standards in order to comply with the NRCCP's own evolution. Many aspects such as the limits of quantification (LOQs), the quality management systems within the laboratories and appropriate method validation are in continuous improvement, generating new scenarios and demands. Thus, efficient management mechanisms for monitoring network performance and its adherence to the established goals and guidelines are required. Performance indicators associated with computerised information systems arise as a powerful tool to monitor the laboratories' activity, making use of different parameters to describe this activity on a day-to-day basis. One of these parameters is related to turnaround times, and this factor is highly affected by the way each laboratory organises its management system, as well as by the regulatory requirements. In this paper a global view is presented of the turnaround times related to the type of analysis, laboratory, number of samples per year, type of matrix, country region and period of the year, all these data being collected from a computerised system called SISRES. This information gives a solid background to management measures aiming at the improvement of the service offered by the laboratory network.
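A turnaround-time indicator of the kind described can be sketched as a simple aggregation. All records below are invented for illustration; SISRES field names and real analyses are not reproduced.

```python
from collections import defaultdict

# Hypothetical (lab, analysis type, turnaround days) records, standing in
# for data a system like SISRES would export.
records = [
    ("lab_A", "aflatoxin", 12), ("lab_A", "aflatoxin", 15),
    ("lab_A", "nitrofuran", 9), ("lab_B", "aflatoxin", 21),
    ("lab_B", "nitrofuran", 18), ("lab_B", "nitrofuran", 24),
]

# Group turnaround times by laboratory.
by_lab = defaultdict(list)
for lab, analysis, days in records:
    by_lab[lab].append(days)

# Indicator: mean turnaround time per laboratory, in days.
indicator = {lab: round(sum(d) / len(d), 1) for lab, d in by_lab.items()}
print(indicator)   # {'lab_A': 12.0, 'lab_B': 21.0}
```

The same grouping could be keyed by matrix, region, or period of the year, as in the paper's global view.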
Horn, Jacqueline; Friess, Wolfgang
2018-01-01
The collapse temperature (Tc) and the glass transition temperature of freeze-concentrated solutions (Tg') as well as the crystallization behavior of excipients are important physicochemical characteristics which guide cycle development in freeze-drying. The most frequently used methods to determine these values are differential scanning calorimetry (DSC) and freeze-drying microscopy (FDM). The objective of this study was to evaluate the optical fiber system (OFS) unit as an alternative tool for the analysis of Tc, Tg' and crystallization events. The OFS unit was also tested as a potential online monitoring tool during freeze-drying. Freeze/thawing and freeze-drying experiments of sucrose, trehalose, stachyose, mannitol, and highly concentrated IgG1 and lysozyme solutions were carried out and monitored by the OFS. Comparative analyses were performed by DSC and FDM. OFS and FDM results correlated well. The crystallization behavior of mannitol could be monitored by the OFS during freeze/thawing as it can be by DSC. Online monitoring of freeze-drying runs detected collapse of amorphous saccharide matrices. The OFS unit enabled the analysis of both Tc and crystallization processes, which is usually carried out by FDM and DSC. The OFS can hence be used as a novel measuring device. Additionally, detection of these events during lyophilization facilitates online monitoring. Thus the OFS is a new beneficial tool for the development and monitoring of freeze-drying processes. PMID:29435445
Grid Stability Awareness System (GSAS) Final Scientific/Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feuerborn, Scott; Ma, Jian; Black, Clifton
The project team developed a software suite named Grid Stability Awareness System (GSAS) for power system near real-time stability monitoring and analysis based on synchrophasor measurement. The software suite consists of five analytical tools: an oscillation monitoring tool, a voltage stability monitoring tool, a transient instability monitoring tool, an angle difference monitoring tool, and an event detection tool. These tools have been integrated into one framework to provide power grid operators with real-time or near real-time stability status of a power grid and historical information about system stability status. These tools are being considered for real-time use in the operation environment.
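The oscillation-monitoring tool's core task, detecting a poorly damped low-frequency mode in synchrophasor data, can be sketched with a plain FFT. GSAS's actual algorithms are not described in the abstract; the PMU rate, mode frequency, and detection logic below are illustrative assumptions.

```python
import numpy as np

fs = 30.0                          # assumed PMU reporting rate, frames/s
t = np.arange(600) / fs            # a 20 s analysis window

# Simulated bus-frequency deviation: a poorly damped 0.7 Hz
# inter-area-style mode plus measurement noise.
rng = np.random.default_rng(2)
signal = 0.05 * np.sin(2 * np.pi * 0.7 * t) + 0.005 * rng.normal(size=t.size)

# Locate the dominant low-frequency component in the spectrum.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[spectrum.argmax()]
print(round(dominant, 1))          # 0.7
```

A production tool would track the mode's damping over successive windows and alarm when it degrades; this sketch only shows the mode-identification step.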
Computer implemented method, and apparatus for controlling a hand-held tool
NASA Technical Reports Server (NTRS)
Wagner, Kenneth William (Inventor); Taylor, James Clayton (Inventor)
1999-01-01
The invention described herein is a computer-implemented method and apparatus for controlling a hand-held tool. In particular, the tool is controlled to regulate the speed of a fastener interface mechanism and the torque it applies to fasteners, and to monitor the tool's operating parameters. The control is embodied in in-tool software embedded on a processor within the tool, which also communicates with remote software. An operator can run the tool directly or, through the interaction of the two software components, operate the tool from a remote location, analyze data from a performance history recorded by the tool, and select torque and speed parameters for each fastener.
CSHM: Web-based safety and health monitoring system for construction management.
Cheung, Sai On; Cheung, Kevin K W; Suen, Henry C H
2004-01-01
This paper describes a web-based system for monitoring and assessing construction safety and health performance, entitled the Construction Safety and Health Monitoring (CSHM) system. The design and development of CSHM integrates internet and database systems, with the intent to create a totally automated safety and health management tool. A list of safety and health performance parameters was devised for the management of safety and health in construction. A conceptual framework of the four key components of CSHM is presented: (a) Web-based Interface (templates); (b) Knowledge Base; (c) Output Data; and (d) Benchmark Group. The combined effect of these components is a system that enables speedy performance assessment of safety and health activities on construction sites. With CSHM's built-in functions, important management decisions can theoretically be made and corrective actions can be taken before potential hazards turn into fatal or injurious occupational accidents. As such, the CSHM system will accelerate the monitoring and assessment of safety and health management tasks.
Performance measurement: A tool for program control
NASA Technical Reports Server (NTRS)
Abell, Nancy
1994-01-01
Performance measurement is a management tool for planning, monitoring, and controlling all aspects of program and project management: cost, schedule, and technical requirements. It is a means (concept and approach) to a desired end (effective program planning and control). To reach the desired end, however, performance measurement must be applied and used appropriately, with full knowledge and recognition of its power and of its limitations--what it can and cannot do for the project manager. What is the potential of this management tool? What does performance measurement do that a traditional plan vs. actual technique cannot do? Performance measurement provides an improvement over the customary comparison of how much money was spent (actual cost) vs. how much was planned to be spent based on a schedule of activities (work planned). This commonly used plan vs. actual comparison does not allow one to know from the numerical data whether the actual cost incurred was for the work intended to be done.
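The limitation described, that a plan-vs-actual spending comparison cannot reveal whether the money bought the intended work, is exactly what earned-value-style performance measurement addresses. A sketch with invented numbers, using common earned value terminology (not necessarily the author's):

```python
# Earned-value-style illustration: plan vs. actual alone cannot say
# whether the money spent bought the work intended.

bcws = 100_000   # budgeted cost of work scheduled (planned spend)
acwp = 100_000   # actual cost of work performed (actual spend)
bcwp = 80_000    # budgeted cost of work actually performed (earned value)

# Naive plan vs. actual: spending matches the plan, so it "looks on track".
print(acwp - bcws)          # 0

# Performance measurement separates schedule and cost variance:
schedule_variance = bcwp - bcws
cost_variance = bcwp - acwp
print(schedule_variance)    # -20000 -> behind schedule
print(cost_variance)        # -20000 -> over cost for the work done
```

Here the project spent exactly its budget yet earned only 80% of the planned work, a condition invisible to the plan-vs-actual comparison alone.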
Mutant KRAS Circulating Tumor DNA Is an Accurate Tool for Pancreatic Cancer Monitoring.
Perets, Ruth; Greenberg, Orli; Shentzer, Talia; Semenisty, Valeria; Epelbaum, Ron; Bick, Tova; Sarji, Shada; Ben-Izhak, Ofer; Sabo, Edmond; Hershkovitz, Dov
2018-05-01
Many new pancreatic cancer treatment combinations have been discovered in recent years, yet the prognosis of pancreatic ductal adenocarcinoma (PDAC) remains grim. The advent of new treatments highlights the need for better monitoring tools for treatment response, to allow a timely switch between different therapeutic regimens. Circulating tumor DNA (ctDNA) is a tool for cancer detection and characterization with growing clinical use. However, currently, ctDNA is not used for monitoring treatment response. The high prevalence of KRAS hotspot mutations in PDAC suggests that mutant KRAS can be an efficient ctDNA marker for PDAC monitoring. Seventeen metastatic PDAC patients were recruited and serial plasma samples were collected. CtDNA was extracted from the plasma, and KRAS mutation analysis was performed using next-generation sequencing and correlated with serum CA19-9 levels, imaging, and survival. Plasma KRAS mutations were detected in 5/17 (29.4%) patients. KRAS ctDNA detection was associated with shorter survival (8 vs. 37.5 months). Our results show that, in ctDNA positive patients, ctDNA is at least comparable to CA19-9 as a marker for monitoring treatment response. Furthermore, the rate of ctDNA change was inversely correlated with survival. Our results confirm that mutant KRAS ctDNA detection in metastatic PDAC patients is a poor prognostic marker. Additionally, we were able to show that mutant KRAS ctDNA analysis can be used to monitor treatment response in PDAC patients and that ctDNA dynamics is associated with survival. We suggest that ctDNA analysis in metastatic PDAC patients is a readily available tool for disease monitoring. Avoiding futile chemotherapy in metastatic pancreatic ductal adenocarcinoma (PDAC) patients by monitoring response to treatment is of utmost importance. A novel biomarker for monitoring treatment response in PDAC, using mutant KRAS circulating tumor DNA (ctDNA), is proposed. 
Results, although limited by small sample numbers, suggest that ctDNA can be an effective marker for disease monitoring and that ctDNA level over time is a better predictor of survival than the dynamics of the commonly used biomarker CA19-9. Therefore, ctDNA analysis can be a useful tool for monitoring PDAC treatment response. These results should be further validated in larger sample numbers. © AlphaMed Press 2018.
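As a rough illustration of the "rate of ctDNA change" tracked in this kind of study, the slope of serial allele-fraction measurements can be estimated by ordinary least squares. The sketch below uses hypothetical draw dates and allele fractions, not the study's data:

```python
from datetime import date

def ctdna_rate_of_change(samples):
    """Least-squares slope of the mutant allele fraction (%),
    reported per 30 days. `samples` is a list of
    (collection_date, allele_fraction_percent) tuples."""
    t0 = samples[0][0]
    xs = [(d - t0).days for d, _ in samples]
    ys = [v for _, v in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 30  # change per 30-day month

# Hypothetical serial draws: allele fraction falling under therapy
draws = [(date(2017, 1, 1), 8.0),
         (date(2017, 2, 1), 5.0),
         (date(2017, 3, 1), 2.0)]
print(round(ctdna_rate_of_change(draws), 2))  # negative slope: responding
```

A negative slope would indicate falling ctDNA burden under treatment; the inverse correlation with survival reported above concerns how fast that fraction changes.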
Wear and breakage monitoring of cutting tools by an optical method: theory
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Yongqing; Chen, Fangrong; Tian, Zhiren; Wang, Yao
1996-10-01
An essential capability of a machining system in an unmanned flexible manufacturing system is the automatic replacement of tools that are worn or damaged. An optoelectronic method for in situ monitoring of the flank wear and breakage of cutting tools is presented. A flank wear estimation system is implemented in a laboratory environment, and its performance is evaluated through turning experiments. The flank wear model parameters that need to be known a priori are determined through several preliminary experiments, or from data available in the literature. The cutting conditions used are typical of those in finishing operations. Through time- and amplitude-domain analysis of the cutting tool wear and breakage states, it is found that the variance of the original signal, σ²x, and the autocorrelation coefficient ρ(m) can reflect the change regularity of cutting tool wear and breakage, but these alone are not sufficient due to the complexity of the wear and breakage process of cutting tools. Time series analysis and frequency spectrum analysis will be carried out and described in later papers.
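The two statistics named in the abstract are standard time-domain quantities. A minimal numpy sketch of the variance σ²x and the lag-m autocorrelation coefficient ρ(m), with synthetic signals standing in for real sensor data:

```python
import numpy as np

def variance_and_autocorr(signal, m):
    """Return the variance (sigma^2_x) and the lag-m autocorrelation
    coefficient rho(m) of a sampled tool-condition signal."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove DC offset
    var = np.mean(x ** 2)
    rho = np.mean(x[:-m] * x[m:]) / var if m > 0 else 1.0
    return var, rho

# A worn tool typically raises signal variance; a periodic component
# shows up as a large rho at the matching lag (here 50 Hz @ 1 kHz).
t = np.arange(0, 1, 1e-3)
fresh = 0.1 * np.sin(2 * np.pi * 50 * t)
worn = 0.5 * np.sin(2 * np.pi * 50 * t)
v1, _ = variance_and_autocorr(fresh, 20)
v2, r2 = variance_and_autocorr(worn, 20)
print(v2 > v1, round(r2, 2))
```

At lag 20 samples (one full 50 Hz period at 1 kHz sampling) the autocorrelation is near 1, which is how periodicity in the wear signal would be detected.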
Travel reliability inventory for Chicago.
DOT National Transportation Integrated Search
2013-04-01
The overarching goal of this research project is to enable state DOTs to document and monitor the reliability performance of their highway networks. To this end, a computer tool, TRIC, was developed to produce travel reliability inventories from ...
Development of transportation asset management decision support tools : final report.
DOT National Transportation Integrated Search
2017-08-09
This study developed a web-based prototype decision support platform to demonstrate the benefits of transportation asset management in monitoring asset performance, supporting asset funding decisions, planning budget tradeoffs, and optimizing resourc...
Investigation of Tapered Roller Bearing Damage Detection Using Oil Debris Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Krieder, Gary; Fichter, Thomas
2006-01-01
A diagnostic tool was developed for detecting fatigue damage to tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. This diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests performed by The Timken Company in their Tapered Roller Bearing Health Monitoring Test Rig. Failure progression tests were performed under simulated engine load conditions. Tests were performed on one healthy bearing and three predamaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor was monitored and recorded for the occurrence of debris generated during failure of the bearing. The bearing was removed periodically for inspection throughout the failure progression tests. Results indicate the accumulated oil debris mass is a good predictor of damage on tapered roller bearings. The use of a fuzzy logic model to enable an easily interpreted diagnostic metric was proposed and demonstrated.
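The abstract proposes a fuzzy-logic model over accumulated debris mass, but gives no membership functions or thresholds. The sketch below uses hypothetical triangular and ramp memberships purely to illustrate how such a diagnostic metric could be formed:

```python
def tri(x, a, b, c):
    """Triangular membership function rising a->b, falling b->c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def damage_index(mass_mg):
    """Map accumulated oil-debris mass (mg) to a 0-1 damage index by
    weighting fuzzy 'ok' / 'inspect' / 'failed' memberships.
    All thresholds here are hypothetical, not from the test rig."""
    ok = max(0.0, min(1.0, (30 - mass_mg) / 30))       # ramp down
    inspect = tri(mass_mg, 10, 40, 70)
    failed = max(0.0, min(1.0, (mass_mg - 50) / 30))   # ramp up
    total = ok + inspect + failed
    # Defuzzify: weighted average of the class centers 0, 0.5, 1
    return (0.0 * ok + 0.5 * inspect + 1.0 * failed) / total

print(damage_index(0), damage_index(40), damage_index(80))
```

The index gives an easily interpreted 0-to-1 metric, which matches the stated goal of the fuzzy-logic model in the paper.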
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agelastos, Anthony; Allan, Benjamin; Brandt, Jim
A detailed understanding of HPC applications’ resource needs and their complex interactions with each other and with HPC platform resources is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
ERIC Educational Resources Information Center
Mahu, Robert J.
2017-01-01
Performance measurement has emerged as a management tool that, accompanied by advances in technology and data analysis, has allowed public officials to control public policy at multiple levels of government. In the United States, the federal government has used performance measurement as part of an accountability strategy that enables Congress and…
Persons, Jacqueline B.; Koerner, Kelly; Eidelman, Polina; Thomas, Cannon; Liu, Howard
2015-01-01
Evidence-based practices (EBPs) reach consumers slowly because practitioners are slow to adopt and implement them. We hypothesized that giving psychotherapists a tool + training intervention that was designed to help the therapist integrate the EBP of progress monitoring into his or her usual way of working would be associated with adoption and sustained implementation of the particular progress monitoring tool we trained them to use (the Depression Anxiety Stress Scales on our Online Progress Tracking tool) and would generalize to all types of progress monitoring measures. To test these hypotheses, we developed an online progress monitoring tool and a course that trained psychotherapists to use it, and we assessed progress monitoring behavior in 26 psychotherapists before, during, immediately after, and 12 months after they received the tool and training. Immediately after receiving the tool + training intervention, participants showed statistically significant increases in use of the online tool and of all types of progress monitoring measures. Twelve months later, participants showed sustained use of any type of progress monitoring measure but not the online tool. PMID:26618237
NASA Astrophysics Data System (ADS)
Hart, D. M.; Merchant, B. J.; Abbott, R. E.
2012-12-01
The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing everything about a test in a relational database that is required to reproduce the results of that test. TALENT provides a simple, yet powerful, user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data were collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude, and 4) twenty-six discrete frequencies between 0.353 Hz and 64 Hz. The results are compared to manufacturer-supplied data sheets.
SU-F-P-04: Implementation of Dose Monitoring Software: Successes and Pitfalls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Och, J
2016-06-15
Purpose: To successfully install a dose monitoring software (DMS) application to assist in CT protocol and dose management. Methods: Upon selecting the DMS, we began our implementation of the application. A working group composed of Medical Physics, Radiology Administration, Information Technology, and CT technologists was formed. On-site training in the application was supplied by the vendor. The decision was made to apply the process to all CT protocols on all platforms at all facilities. Protocols were painstakingly mapped to the correct masters, and the system went ‘live’. Results: We are routinely using DMS as a tool in our Clinical Performance CT QA program. It is useful in determining the effectiveness of revisions to existing protocols and in establishing performance baselines for new units. However, the implementation was not without difficulty. We identified several pitfalls and obstacles which frustrated progress, including training deficiencies, nomenclature problems, communication gaps, and DICOM variability. Conclusion: Dose monitoring software can be a potent tool for QA. However, implementation of the program can be problematic and requires planning, organization and commitment.
Continued Development of Expert System Tools for NPSS Engine Diagnostics
NASA Technical Reports Server (NTRS)
Lewandowski, Henry
1996-01-01
The objectives of this grant were to work with previously developed NPSS (Numerical Propulsion System Simulation) tools and enhance their functionality; explore similar AI systems; and work with the High Performance Computing Communication (HPCC) K-12 program. Activities for this reporting period are briefly summarized and a paper addressing the implementation, monitoring and zooming in a distributed jet engine simulation is included as an attachment.
Villas-Boas, Mariana D; Olivera, Francisco; de Azevedo, Jose Paulo S
2017-09-01
Water quality monitoring is a complex issue that requires support tools in order to provide information for water resource management. Budget constraints as well as an inadequate water quality network design call for the development of evaluation tools to provide efficient water quality monitoring. For this purpose, a nonlinear principal component analysis (NLPCA) based on an autoassociative neural network was performed to assess the redundancy of the parameters and monitoring locations of the water quality network in the Piabanha River watershed. Oftentimes, a small number of variables contain the most relevant information, while the others add little or no interpretation to the variability of water quality. Principal component analysis (PCA) is widely used for this purpose. However, conventional PCA is not able to capture the nonlinearities of water quality data, while neural networks can represent those nonlinear relationships. The results presented in this work demonstrate that NLPCA performs better than PCA in the reconstruction of the water quality data of Piabanha watershed, explaining most of data variance. From the results of NLPCA, the most relevant water quality parameter is fecal coliforms (FCs) and the least relevant is chemical oxygen demand (COD). Regarding the monitoring locations, the most relevant is Poço Tarzan (PT) and the least is Parque Petrópolis (PP).
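The conventional PCA baseline that NLPCA is compared against can be sketched in a few lines of numpy: project onto the leading components and measure how much variance the reconstruction misses. The data here are synthetic, not the Piabanha measurements:

```python
import numpy as np

def pca_reconstruction_error(X, k):
    """Reconstruct X from its first k principal components and return
    the fraction of variance left unexplained (lower is better)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k reconstruction
    return np.sum((Xc - Xk) ** 2) / np.sum(Xc ** 2)

rng = np.random.default_rng(0)
# 50 samples of 6 correlated "water quality parameters" driven by
# 2 latent factors plus small noise
base = rng.normal(size=(50, 2))
X = base @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(50, 6))
print(pca_reconstruction_error(X, 2) < 0.01)  # two components suffice
```

An autoassociative network replaces the linear projection with a nonlinear bottleneck, which is why it can reconstruct nonlinear water-quality relationships that this linear version cannot.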
Time series analysis of tool wear in sheet metal stamping using acoustic emission
NASA Astrophysics Data System (ADS)
Vignesh Shanbhag, V.; Pereira, P. Michael; Rolfe, F. Bernard; Arunachalam, N.
2017-09-01
Galling is an adhesive wear mode that often affects the lifespan of stamping tools. Since stamping tools represent significant economic cost, even a slight improvement in maintenance cost is of high importance for the stamping industry. In other manufacturing industries, online tool condition monitoring has been used to prevent tool wear-related failure. However, monitoring the acoustic emission signal from a stamping process is a non-trivial task, since the acoustic emission signal is non-stationary and transient. There have been numerous studies examining acoustic emissions in sheet metal stamping. However, very few have focused in detail on how the signals change as wear on the tool surface progresses prior to failure. In this study, time domain analysis was applied to the acoustic emission signals to extract features related to tool wear. To understand the wear progression, accelerated stamping tests were performed using a semi-industrial stamping setup which can perform clamping, piercing and stamping in a single cycle. The time domain features related to stamping were computed for the acoustic emission signal of each part. The sidewalls of the stamped parts were scanned using an optical profilometer to obtain profiles of the worn part, and these were qualitatively correlated with the acoustic emission signal. Based on the wear behaviour, the wear data can be divided into three stages: in the first stage, no wear is observed; in the second stage, adhesive wear is likely to occur; and in the third stage, severe abrasive plus adhesive wear is likely to occur. Scanning electron microscopy showed the formation of lumps on the stamping tool, which represents galling behaviour. Correlation between the time domain features of the acoustic emission signal and the wear progression identified in this study lays the basis for tool diagnostics in the stamping industry.
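The abstract does not enumerate the time-domain features used, so the sketch below computes features commonly used in AE tool-condition monitoring (RMS, peak, crest factor, kurtosis) on synthetic frames; burst-like galling events inflate the impulsive metrics:

```python
import numpy as np

def ae_time_features(frame):
    """Common time-domain features for one AE frame (one stamping cycle)."""
    x = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    crest = peak / rms                    # impulsiveness relative to RMS
    z = (x - x.mean()) / x.std()
    kurtosis = np.mean(z ** 4)            # heavy tails -> AE bursts
    return {"rms": rms, "peak": peak, "crest": crest, "kurtosis": kurtosis}

rng = np.random.default_rng(1)
smooth = rng.normal(0, 1, 4096)           # steady background emission
bursty = smooth.copy()
bursty[::512] += 25.0                     # sparse high-amplitude bursts
f1, f2 = ae_time_features(smooth), ae_time_features(bursty)
print(f2["kurtosis"] > f1["kurtosis"], f2["crest"] > f1["crest"])
```

Tracking such features cycle by cycle is what allows the three wear stages described above to be separated.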
Subsidence monitoring system for offshore applications: technology scouting and feasibility studies
NASA Astrophysics Data System (ADS)
Miandro, R.; Dacome, C.; Mosconi, A.; Roncari, G.
2015-11-01
Because of concern about possible impacts of hydrocarbon production activities on coastal-area environments and infrastructures, new hydrocarbon offshore development projects in Italy must submit a monitoring plan to the Italian authorities to measure and analyse real-time subsidence evolution. The general geological context, where the main offshore Adriatic fields are located, is represented by young unconsolidated terrigenous sediments. In such geological environments, sea floor subsidence caused by hydrocarbon extraction is quite probable. Though many tools are available for subsidence monitoring onshore, few are available for offshore monitoring. To fill the gap, ENI (Ente Nazionale Idrocarburi) started a research program, principally in collaboration with three companies, to develop a monitoring system to measure seafloor subsidence. The tool, according to ENI's technical design specification, would be a robust long pipeline or cable, with a variable or constant outside diameter (less than or equal to 100 mm) and measuring points spaced at intervals. The design specifications for the first prototype were: to detect altitude variations of 1 mm, to operate in water depths up to 100 m, and to cover an investigation length of 3 km. Advanced feasibility studies have been carried out with Fugro Geoservices B.V. (Netherlands), D'Appolonia (Italy) and Agisco (Italy). Five designs, based on three fundamental measurement concepts, were explored: cable shape changes measured by cable strain using fiber optics (Fugro); cable inclination measured using tiltmeters (D'Appolonia) or fiber optics (Fugro); and internal cable altitude-dependent pressure changes measured using fiber optics (Fugro) or pressure transducers at discrete intervals along the hydraulic system (Agisco). Each design was analysed and a rank ordering of preferences was performed.
The third method (measurement of pressure changes), with the solution proposed by Agisco, was deemed most feasible. Agisco is building the first prototype of the tool, to be installed in an offshore field in the next few years. This paper describes the instrument designs from the three companies that satisfy the design specification.
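The pressure-based concept rests on the hydrostatic relation Δp = ρ·g·Δh, so the 1 mm resolution target translates directly into a pressure-resolution requirement. A small sketch, with an assumed fill-fluid density:

```python
RHO = 1025.0   # assumed fill-fluid / seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def settlement_mm(delta_p_pa):
    """Altitude change (mm) implied by a hydrostatic pressure change
    at a measuring point: dp = rho * g * dh."""
    return delta_p_pa / (RHO * G) * 1000.0

# Pressure resolution needed to meet the 1 mm specification, in Pa:
print(round(RHO * G * 0.001, 1))  # -> 10.1
```

In other words, each transducer along the hydraulic line must resolve pressure changes of roughly 10 Pa to detect 1 mm of seafloor settlement.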
Progress in the development and integration of fluid flow control tools in paper microfluidics.
Fu, Elain; Downs, Corey
2017-02-14
Paper microfluidics is a rapidly growing subfield of microfluidics in which paper-like porous materials are used to create analytical devices. There is a need for higher performance field-use tests for many application domains including human disease diagnosis, environmental monitoring, and veterinary medicine. A key factor in creating high performance paper-based devices is the ability to manipulate fluid flow within the devices. This critical review is focused on the progress that has been made in (i) the development of fluid flow control tools and (ii) the integration of those tools into paper microfluidic devices. Further, we strive to be comprehensive in our presentation and provide historical context through discussion and performance comparisons, when possible, of both relevant earlier work and recent work. Finally, we discuss the major areas of focus for fluid flow methods development to advance the potential of paper microfluidics for high-performance field applications.
Helmet-Cam: tool for assessing miners’ respirable dust exposure
Cecala, A.B.; Reed, W.R.; Joy, G.J.; Westmoreland, S.C.; O’Brien, A.D.
2015-01-01
Video technology coupled with datalogging exposure monitors has been used to evaluate worker exposure to different types of contaminants. However, previous applications of this technology used a stationary video camera to record the worker’s activity while the worker wore some type of contaminant monitor. These techniques are not applicable to mobile workers in the mining industry because of their need to move around the operation while performing their duties. The Helmet-Cam is a recently developed exposure assessment tool that integrates a person-wearable video recorder with a datalogging dust monitor. These are worn by the miner in a backpack, safety belt or safety vest to identify areas or job tasks of elevated exposure. After a miner performs his or her job while wearing the unit, the video and dust exposure data files are downloaded to a computer and then merged together through a NIOSH-developed computer software program called Enhanced Video Analysis of Dust Exposure (EVADE). By providing synchronized playback of the merged video footage and dust exposure data, the EVADE software allows for the assessment and identification of key work areas and processes, as well as work tasks that significantly impact a worker’s personal respirable dust exposure. The Helmet-Cam technology has been tested at a number of metal/nonmetal mining operations and has proven to be a valuable assessment tool. Mining companies wishing to use this technique can purchase a commercially available video camera and an instantaneous dust monitor to obtain the necessary data, and the NIOSH-developed EVADE software will be available for download at no cost on the NIOSH website. PMID:26380529
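The synchronized-playback step amounts to aligning each video frame with the nearest-in-time dust reading. A hypothetical sketch of that alignment (this is not the NIOSH EVADE implementation, just the underlying idea):

```python
import bisect

def align_exposure(frame_times, sample_times, samples):
    """For each video frame time (s), return the dust reading whose
    log timestamp is nearest, mimicking synchronized playback.
    `sample_times` must be sorted ascending."""
    out = []
    for t in frame_times:
        i = bisect.bisect_left(sample_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sample_times)]
        j = min(candidates, key=lambda j: abs(sample_times[j] - t))
        out.append(samples[j])
    return out

# Dust monitor logs at 1 Hz; four video frame timestamps shown here
dust_t = [0.0, 1.0, 2.0, 3.0]
dust_mg = [0.10, 0.12, 0.55, 0.50]   # hypothetical spike during one task
frames = [0.2, 1.4, 2.1, 2.9]
print(align_exposure(frames, dust_t, dust_mg))
```

Once each frame carries a dust value, scrubbing through the video immediately shows which task produced the exposure spike.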
NASA Astrophysics Data System (ADS)
Alammar, Montaha; Austin, William
2017-04-01
The present study represents an attempt to evaluate the impacts of marine aquaculture on benthic foraminiferal communities in order to develop an improved, quantitative understanding of their response to the variation in benthic environmental gradients associated with fish farms in Scotland. Furthermore, their performance as a bio-monitoring tool will be discussed, and ongoing research to evaluate them alongside traditional bioecological indicators is outlined. Foraminiferal faunas offer the potential to assess ecological quality status through their response to stress gradients (e.g. organic matter enrichment), such as that caused by intensive fish farming in coastal sediments. In this study, we followed the Foraminiferal Bio-Monitoring (FOBIMO) protocol (Schönfeld et al., 2012), which proposed a standardised methodology for using foraminifera as a bio-monitoring tool to evaluate the quality of the marine ecosystem, and applied these protocols to the rapidly expanding marine aquaculture sector in Scotland, UK. Eight stations were sampled along a transect in Loch Creran, west coast of Scotland, to describe the spatial and down-core (temporal) distribution pattern of benthic foraminiferal assemblages. Triplicate, Rose-Bengal-stained samples from an interval of 0–1 cm below the sediment surface were studied at each station, from below the fish cages (impacted stations) to a distance from the farming sites (control stations). Morphospecies counts were conducted, and the organic carbon and the grain size distributions determined. Species richness beneath these fish farming cages was analysed and showed a reduction of foraminiferal density and diversity at the impacted stations.
Developments in seismic monitoring for risk reduction
Celebi, M.
2007-01-01
This paper presents recent state-of-the-art developments to obtain displacements and drift ratios for seismic monitoring and damage assessment of buildings. In most cases, decisions on safety of buildings following seismic events are based on visual inspections of the structures. Real-time instrumental measurements using GPS or double integration of accelerations, however, offer a viable alternative. Relevant parameters, such as the type of connections and structural characteristics (including storey geometry), can be estimated to compute drifts corresponding to several pre-selected threshold stages of damage. Drift ratios determined from real-time monitoring can then be compared to these thresholds in order to estimate damage conditions. This approach is demonstrated in three steel frame buildings in San Francisco, California. Recently recorded data of strong shaking from these buildings indicate that the monitoring system can be a useful tool in rapid assessment of buildings and other structures following an earthquake. Such systems can also be used for risk monitoring, as a method to assess performance-based design and analysis procedures, for long-term assessment of structural characteristics of a building, and as a possible long-term damage detection tool.
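Computing drift from double integration of recorded accelerations can be sketched with a trapezoidal-rule integrator. Real strong-motion processing would add baseline correction and filtering, which are omitted here; the input is a synthetic 1 Hz sway:

```python
import numpy as np

def drift_ratio(accel_top, accel_base, dt, storey_height):
    """Double-integrate floor accelerations (trapezoidal rule, zero
    initial conditions) and return the peak inter-storey drift ratio."""
    def integrate(a):
        v = np.cumsum((a[:-1] + a[1:]) / 2) * dt
        v = np.concatenate(([0.0], v))
        d = np.cumsum((v[:-1] + v[1:]) / 2) * dt
        return np.concatenate(([0.0], d))
    rel = integrate(accel_top) - integrate(accel_base)
    return np.max(np.abs(rel)) / storey_height

t = np.arange(0, 2, 0.01)
w = 2 * np.pi * 1.0                      # 1 Hz sway
base = np.zeros_like(t)                  # rigid base
top = 0.05 * w**2 * np.cos(w * t)        # accel of d(t)=0.05*(1-cos(wt)) m
print(round(drift_ratio(top, base, 0.01, 3.0), 3))
```

The peak relative displacement is 0.1 m over a 3 m storey, so the recovered drift ratio is about 0.033; it is this number that would be compared to pre-selected damage thresholds.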
Sloane, E B; Gelhot, V
2004-01-01
This research is motivated by the rapid pace of medical device and information system integration. Although the ability to interconnect many medical devices and information systems may help improve patient care, there is no way to detect if incompatibilities between one or more devices might cause critical events such as patient alarms to go unnoticed or cause one or more of the devices to become stuck in a disabled state. Petri net tools allow automated testing of all possible states and transitions between devices and/or systems to detect potential failure modes in advance. This paper describes an early research project to use Petri nets to simulate and validate a multi-modality central patient monitoring system. A free Petri net tool, HPSim, is used to simulate two wireless patient monitoring networks: one with 44 heart monitors and a central monitoring system and a second version that includes an additional 44 wireless pulse oximeters. In the latter Petri net simulation, a potentially dangerous heart arrhythmia and pulse oximetry alarms were detected.
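The core of Petri-net validation is exhaustive exploration of the reachable markings to find states where no transition can fire. A toy sketch of that exploration follows (this is not HPSim, and the net is far smaller than the 44-monitor model): an alarm token can only be "shown" if a display-channel token exists, so starting without the channel yields a dead state with the alarm unnoticed.

```python
from collections import deque

def reachable_markings(m0, transitions):
    """Breadth-first exploration of a Petri net's marking graph.
    `transitions` maps a name to (consume, produce) dicts of
    place -> token count. Markings with no enabled transition
    ("dead" states, e.g. a stuck device) are collected."""
    seen, queue, dead = {m0}, deque([m0]), []
    while queue:
        m = queue.popleft()
        marking = dict(m)
        enabled = False
        for _, (consume, produce) in transitions.items():
            if all(marking.get(p, 0) >= n for p, n in consume.items()):
                enabled = True
                nxt = dict(marking)
                for p, n in consume.items():
                    nxt[p] -= n
                for p, n in produce.items():
                    nxt[p] = nxt.get(p, 0) + n
                m2 = frozenset((p, n) for p, n in nxt.items() if n)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
        if not enabled:
            dead.append(m)
    return seen, dead

net = {"display": ({"alarm": 1, "channel": 1},
                   {"shown": 1, "channel": 1})}
m_ok = frozenset({"alarm": 1, "channel": 1}.items())
m_bad = frozenset({"alarm": 1}.items())   # channel never granted
seen_ok, _ = reachable_markings(m_ok, net)
_, dead_bad = reachable_markings(m_bad, net)
print(any(dict(m).get("shown") for m in seen_ok), dead_bad == [m_bad])
```

Scaling the same search to a marking per monitor is what lets the tool check all interleavings of 44 devices for missed-alarm and stuck states.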
Ochagavía, A; Baigorri, F; Mesquida, J; Ayuela, J M; Ferrándiz, A; García, X; Monge, M I; Mateu, L; Sabatier, C; Clau-Terré, F; Vicho, R; Zapata, L; Maynar, J; Gil, A
2014-04-01
Hemodynamic monitoring offers valuable information on cardiovascular performance in the critically ill, and has become a fundamental tool in the diagnostic approach and in the therapy guidance of those patients presenting with tissue hypoperfusion. From the introduction of the pulmonary artery catheter to the latest less invasive technologies, hemodynamic monitoring has been surrounded by many questions regarding its usefulness and its ultimate impact on patient prognosis. The Cardiological Intensive Care and CPR Working Group (GTCIC-RCP) of the Spanish Society of Intensive Care and Coronary Units (SEMICYUC) has recently promoted the development of a series of updates on hemodynamic monitoring. Now, a final series of recommendations is presented in order to analyze essential issues in hemodynamics, with the purpose of becoming a useful tool for residents and critical care practitioners involved in the daily management of critically ill patients. Copyright © 2013 Elsevier España, S.L. and SEMICYUC. All rights reserved.
Real-time video quality monitoring
NASA Astrophysics Data System (ADS)
Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey
2011-12-01
The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure the video quality. However, this model was originally designed as an offline quality planning tool. It cannot be directly used for quality monitoring, since the above three input parameters are not readily available within a network or at the decoder, and there is considerable room for improving the performance of this quality metric. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that it can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an example emerging application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
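The G.1070 opinion model is parametric in bitrate, frame rate and packet-loss rate. The sketch below mimics only its general shape, with made-up coefficients rather than the standardized ones: coding quality saturates with bitrate, is penalized when the frame rate strays from an optimum, and decays exponentially with packet loss.

```python
import math

def video_quality(bitrate_kbps, frame_rate, packet_loss_pct,
                  v=(1.0, 500.0, 10.0, 0.1)):
    """G.1070-style parametric MOS estimate on a 1-5 scale.
    The coefficients v are illustrative, not the Recommendation's."""
    v1, v2, v3, v4 = v
    # Coding quality rises with bitrate and saturates toward 4.5
    i_coding = 1 + 3.5 * (1 - math.exp(-bitrate_kbps / v2))
    # Log-Gaussian penalty around an optimal frame rate (here ~10 fps)
    fr_penalty = math.exp(-((math.log(frame_rate) - math.log(v3 * v1)) ** 2) / 2)
    # Exponential degradation with packet-loss percentage
    return 1 + (i_coding - 1) * fr_penalty * math.exp(-packet_loss_pct * v4)

good = video_quality(2000, 25, 0.0)
lossy = video_quality(2000, 25, 5.0)
print(good > lossy, 1.0 <= lossy <= good <= 5.0)
```

The monitoring problem described in the abstract is then to estimate the three inputs from the bitstream itself rather than from planning-time assumptions.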
Remote console for virtual telerehabilitation.
Lewis, Jeffrey A; Boian, Rares F; Burdea, Grigore; Deutsch, Judith E
2005-01-01
The Remote Console (ReCon) telerehabilitation system provides a platform for therapists to guide rehabilitation sessions from a remote location. The ReCon system integrates real-time graphics, audio/video communication, private therapist chat, post-test data graphs, extendable patient and exercise performance monitoring, exercise pre-configuration and modification under a single application. These tools give therapists the ability to conduct training, monitoring/assessment, and therapeutic intervention remotely and in real-time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spangler, Lee; Cunningham, Alfred; Lageson, David
2011-03-31
ZERT has made major contributions to five main areas of sequestration science: improvement of computational tools; measurement and monitoring techniques to verify storage and track migration of CO2; development of a comprehensive performance and risk assessment framework; fundamental geophysical, geochemical and hydrological investigations of CO2 storage; and investigation of innovative, bio-based mitigation strategies.
Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools
NASA Astrophysics Data System (ADS)
Sánchez Pineda, A.
2015-12-01
We explore the potential of current web applications to create online interfaces that allow visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analyses in a browser, reading and using real data and official Monte Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.
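A cut-based analysis reduces to applying a sequence of selection predicates to events and recording the cut-flow. A minimal sketch in Python with toy events (not ATLAS data, and not the actual H → ZZ → llqq selection):

```python
def apply_cuts(events, cuts):
    """Sequential cut flow: returns surviving events and a per-cut
    table of (name, events remaining, efficiency of this cut)."""
    flow = []
    for name, predicate in cuts:
        before = len(events)
        events = [e for e in events if predicate(e)]
        flow.append((name, len(events), len(events) / before if before else 0.0))
    return events, flow

# Toy events: (lepton pT in GeV, dijet invariant mass in GeV)
events = [(35, 91), (12, 88), (50, 60), (41, 95), (28, 130)]
cuts = [("lep_pt > 25", lambda e: e[0] > 25),
        ("80 < m_jj < 100", lambda e: 80 < e[1] < 100)]
survivors, flow = apply_cuts(events, cuts)
print(len(survivors), [f[1] for f in flow])
```

In the browser setting described above, the predicates would be edited interactively and re-applied to the loaded dataset, with the cut-flow table redrawn after each change.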
Perspectives on Wellness Self-Monitoring Tools for Older Adults
Huh, Jina; Le, Thai; Reeder, Blaine; Thompson, Hilaire J.; Demiris, George
2013-01-01
Purpose Our purpose was to understand different stakeholder perceptions about the use of self-monitoring tools, specifically in the area of older adults’ personal wellness. In conjunction with the advent of personal health records, tracking personal health using self-monitoring technologies shows promising patient support opportunities. While clinicians’ tools for monitoring of older adults have been explored, we know little about how older adults may self-monitor their wellness and health and how their health care providers would perceive such use. Methods We conducted three focus groups with health care providers (n=10) and four focus groups with community-dwelling older adults (n=31). Results Older adult participants found the concept of self-monitoring unfamiliar, and this influenced a narrowed interest in the use of wellness self-monitoring tools. On the other hand, health care provider participants showed open attitudes towards wellness monitoring tools for older adults and brainstormed about various stakeholders’ use cases. The two participant groups showed diverging perceptions in terms of: perceived uses, stakeholder interests, information ownership and control, and sharing of wellness monitoring tools. Conclusions Our paper provides implications and solutions for how older adults’ wellness self-monitoring tools can enhance patient-health care provider interaction, patient education, and improvement in overall wellness. PMID:24041452
Graphical Contingency Analysis for the Nation's Electric Grid
Zhenyu (Henry) Huang
2017-12-09
PNNL has developed a new tool to manage the electric grid more effectively, helping prevent blackouts and brownouts--and possibly avoiding millions of dollars in fines for system violations. The Graphical Contingency Analysis tool monitors grid performance, shows prioritized lists of problems, provides visualizations of potential consequences, and helps operators identify the most effective courses of action. This technology yields faster, better decisions and a more stable and reliable power grid.
AE Monitoring of Diamond Turned Rapidly Solidified Aluminium 443
NASA Astrophysics Data System (ADS)
Onwuka, G.; Abou-El-Hossein, K.; Mkoko, Z.
2017-05-01
The rapid replacement of conventional aluminium with rapidly solidified aluminium alloys has become a noticeable trend in manufacturing industries involved in the production of optics and optical molding inserts, a result of the improved performance and durability of rapidly solidified aluminium alloys compared to conventional aluminium. The melt spinning process is vital for manufacturing rapidly solidified aluminium alloys such as RSA 905, RSA 6061 and RSA 443, which are common in industry today. RSA 443 is a newly developed alloy with few research findings and large research potential; no available literature has focused on monitoring the machining of RSA 443 alloys. In this research, an acoustic emission (AE) sensing technique was applied to monitor the single-point diamond turning of RSA 443 on an ultrahigh-precision lathe. The machining process was carried out after careful selection of feed, speed and depth of cut. Monitoring was performed with a high-sampling-rate data acquisition system using different tools, while concurrent measurements of surface roughness and tool wear were taken after covering a total feed distance of 13 km. An increasing trend in raw AE spikes and peak-to-peak signal was observed with increasing surface roughness and tool wear values. Hence, acoustic emission sensing proves to be an effective monitoring method for the machining of RSA 443 alloy.
Baby-MONITOR: A Composite Indicator of NICU Quality
Kowalkowski, Marc A.; Zupancic, John A. F.; Pietz, Kenneth; Richardson, Peter; Draper, David; Hysong, Sylvia J.; Thomas, Eric J.; Petersen, Laura A.; Gould, Jeffrey B.
2014-01-01
BACKGROUND AND OBJECTIVES: NICUs vary in the quality of care delivered to very low birth weight (VLBW) infants. NICU performance on 1 measure of quality only modestly predicts performance on others. Composite measurement of quality of care delivery may provide a more comprehensive assessment of quality. The objective of our study was to develop a robust composite indicator of quality of NICU care provided to VLBW infants that accurately discriminates performance among NICUs. METHODS: We developed a composite indicator, Baby-MONITOR, based on 9 measures of quality chosen by a panel of experts. Measures were standardized, equally weighted, and averaged. We used the California Perinatal Quality Care Collaborative database to perform a cross-sectional analysis of care given to VLBW infants between 2004 and 2010. Performance on the Baby-MONITOR is not an absolute marker of quality but indicates overall performance relative to that of the other NICUs. We used sensitivity analyses to assess the robustness of the composite indicator, by varying assumptions and methods. RESULTS: Our sample included 9023 VLBW infants in 22 California regional NICUs. We found significant variations within and between NICUs on measured components of the Baby-MONITOR. Risk-adjusted composite scores discriminated performance among this sample of NICUs. Sensitivity analysis that included different approaches to normalization, weighting, and aggregation of individual measures showed the Baby-MONITOR to be robust (r = 0.89–0.99). CONCLUSIONS: The Baby-MONITOR may be a useful tool to comprehensively assess the quality of care delivered by NICUs. PMID:24918221
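The standardize-weight-average aggregation described above can be sketched as follows; the NICU identifiers and measure values are hypothetical, and this toy omits the risk adjustment the actual Baby-MONITOR applies:

```python
import statistics

def baby_monitor_composite(nicu_scores):
    """Equal-weight composite: z-score each quality measure across
    NICUs, then average the standardized measures for each NICU.
    nicu_scores: {nicu_id: {measure_name: value}} (hypothetical data)."""
    measures = sorted(next(iter(nicu_scores.values())))
    z = {}
    for m in measures:
        vals = [s[m] for s in nicu_scores.values()]
        mean = statistics.mean(vals)
        sd = statistics.stdev(vals) or 1.0  # guard against zero spread
        z[m] = {nicu: (s[m] - mean) / sd for nicu, s in nicu_scores.items()}
    # A NICU's composite is the plain average of its standardized measures.
    return {nicu: statistics.mean(z[m][nicu] for m in measures)
            for nicu in nicu_scores}
```

Because each measure is standardized across the sample, a composite near zero indicates average relative performance, matching the paper's note that the score is relative rather than absolute.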
LEMON - LHC Era Monitoring for Large-Scale Infrastructures
NASA Astrophysics Data System (ADS)
Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron
2011-12-01
At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. However, as a result, monitoring complexity is increasing. Computer centre management requires not only monitoring of servers, network equipment and associated software but also collection of additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to give a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
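The agent/sensor split described above can be illustrated with a minimal collector; this is a generic sketch, not Lemon's actual API, and the sensor and metric names are made up:

```python
import queue
import time

class Sensor:
    """Minimal sensor interface: each sensor returns one
    (metric_name, value) sample per call."""
    def sample(self):
        raise NotImplementedError

class TemperatureSensor(Sensor):
    """Hypothetical facilities sensor (name and value are illustrative)."""
    def sample(self):
        return ("facility.temperature", 21.5)

class Agent:
    """Collects samples from registered sensors and forwards them to a
    central measurement repository (modelled here as a queue)."""
    def __init__(self, repository):
        self.sensors = []
        self.repository = repository

    def register(self, sensor):
        self.sensors.append(sensor)

    def collect_once(self):
        for sensor in self.sensors:
            name, value = sensor.sample()
            self.repository.put((time.time(), name, value))
```

The design point is the narrow `Sensor` interface: new metrics (including proxied readings for remote switches or power supplies) can be added without touching the agent loop.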
Tools for automated acoustic monitoring within the R package monitoR
Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese
2016-01-01
The R package monitoR contains tools for managing an acoustic-monitoring program, including survey metadata, template creation and manipulation, automated detection, and results management. These tools are scalable for use with small projects as well as larger long-term projects and those with expansive spatial extents. Here, we describe the typical workflow when using the tools in monitoR. This workflow uses a generic sequence of functions, with the option of either binary point matching or spectrogram cross-correlation detectors.
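monitoR itself is an R package; as a language-neutral illustration of the spectrogram cross-correlation idea, a toy detector that slides a template along the time axis of a spectrogram might look like this (all names are hypothetical):

```python
import numpy as np

def xcorr_detect(spec, template, threshold=0.8):
    """Score each time offset of `spec` (freq x time matrix) against a
    template of the same frequency extent using a Pearson-style
    correlation; offsets scoring above `threshold` are candidate
    detections. Toy version of spectrogram cross-correlation."""
    n_t = template.shape[1]
    t = (template - template.mean()) / template.std()
    scores = []
    for i in range(spec.shape[1] - n_t + 1):
        win = spec[:, i:i + n_t]
        # Small epsilon avoids division by zero on silent windows.
        w = (win - win.mean()) / (win.std() + 1e-12)
        scores.append(float((t * w).mean()))
    scores = np.array(scores)
    return scores, np.flatnonzero(scores >= threshold)
```

A real detector additionally handles amplitude normalization, overlapping detections, and score-to-event conversion, which monitoR's results-management functions take care of.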
ERIC Educational Resources Information Center
Zantal-Wiener, Kathy; Horwood, Thomas J.
2010-01-01
The authors propose a comprehensive evaluation framework to prepare for evaluating school emergency management programs. This framework involves a logic model that incorporates Government Performance and Results Act (GPRA) measures as a foundation for comprehensive evaluation that complements performance monitoring used by the U.S. Department of…
Telecommunications end-to-end systems monitoring on TOPEX/Poseidon: Tools and techniques
NASA Technical Reports Server (NTRS)
Calanche, Bruno J.
1994-01-01
The TOPEX/Poseidon Project Satellite Performance Analysis Team's (SPAT) roles and responsibilities have grown to include functions that are typically performed by other teams on JPL Flight Projects. In particular, SPAT Telecommunications' role has expanded beyond the nominal function of monitoring, assessing, characterizing, and trending the spacecraft (S/C) RF/Telecom subsystem to one of End-to-End Information Systems (EEIS) monitoring. This has been accomplished by taking advantage of the spacecraft and ground data system structures and protocols. By processing both the received spacecraft telemetry minor frame ground generated CRC flags and NASCOM block poly error flags, bit error rates (BER) for each link segment can be determined. This provides the capability to characterize the separate link segments, determine science data recovery, and perform fault/anomaly detection and isolation. By monitoring and managing the links, TOPEX has successfully recovered approximately 99.9 percent of the science data with an integrity (BER) of better than 1 x 10^-8. This paper presents the algorithms used to process the above flags and the techniques used for EEIS monitoring.
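The frame-flag-based BER estimate described above can be approximated as follows; the frame sizes and counts are illustrative, and the paper's exact algorithm may differ:

```python
def segment_ber(error_frames, total_frames, bits_per_frame):
    """Rough per-link-segment bit error rate from frame-level error
    flags (e.g. CRC flags or NASCOM block poly flags). Assumes one
    bit error per flagged frame, so this is a lower-bound-style
    estimate of the true BER."""
    return error_frames / (total_frames * bits_per_frame)

def recovered_fraction(good_frames, total_frames):
    """Fraction of science data recovered, at frame granularity."""
    return good_frames / total_frames
```

Computing these ratios separately per segment (space-to-ground vs. NASCOM ground links) is what lets the flags localize where errors are introduced.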
Evaluating the Fraser Health Balanced Scorecard--a formative evaluation.
Barnardo, Catherine; Jivanni, Amin
2009-01-01
Fraser Health (FH), a large, Canadian, integrated health care network, adopted the Balanced Scorecard (BSC) approach to monitor organizational performance in 2006. This paper reports on the results of a formative evaluation, conducted in April, 2008, to assess the usefulness of the BSC as a performance-reporting system and a performance management tool. Results indicated that the BSC has proven to be useful for reporting performance but is not currently used for performance management in a substantial way.
Magnetic Resonance Imaging of Gel-cast Ceramic Composites
DOE R&D Accomplishments Database
Dieckman, S. L.; Balss, K. M.; Waterfield, L. G.; Jendrzejczyk, J. A.; Raptis, A. C.
1997-01-16
Magnetic resonance imaging (MRI) techniques are being employed to aid in the development of advanced near-net-shape gel-cast ceramic composites. MRI is a unique nondestructive evaluation tool that provides information on both the chemical and physical properties of materials. In this effort, MRI was used to monitor the drying of porous green-state alumina - methacrylamide-N,N′-methylene bisacrylamide (MAM-MBAM) polymerized composite specimens. Studies were performed on several specimens as a function of humidity and time. The mass and shrinkage of the specimens were also monitored and correlated with the water content.
Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology
NASA Astrophysics Data System (ADS)
Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya
2017-09-01
Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Many review articles introduce thermal error research on CNC machine tools, but they mainly focus on thermal issues in small and medium-sized CNC machine tools and seldom cover thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of thermal error in heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology called the "fiber Bragg grating (FBG) distributed sensing technology" for heavy-duty CNC machine tools is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in the review literature, guiding the development of this industry field and opening up new areas of research on heavy-duty CNC machine tool thermal error.
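As a hedged illustration of how FBG readings become temperature measurements: a Bragg-wavelength shift maps to a temperature change through the grating's temperature sensitivity. The ~10 pm/°C default below is only a typical order of magnitude for gratings near 1550 nm; real sensors are individually calibrated, and strain cross-sensitivity is ignored here:

```python
def fbg_delta_temperature(delta_lambda_pm, sensitivity_pm_per_degC=10.0):
    """Convert an FBG Bragg-wavelength shift (in picometres) into a
    temperature change (degrees C), assuming a linear, calibrated
    temperature sensitivity and no strain contribution."""
    return delta_lambda_pm / sensitivity_pm_per_degC
```

In a distributed-sensing setup, many such gratings along one fiber are interrogated by wavelength, giving a temperature field over the machine structure from which thermal deformation can be modeled.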
Rallis, Austin; Fercho, Kelene A; Bosch, Taylor J; Baugh, Lee A
2018-01-31
Tool use is associated with three visual streams: the dorso-dorsal, ventro-dorsal, and ventral visual streams. These streams are involved in processing online motor planning, action semantics, and tool semantics features, respectively. Little is known about the way in which the brain represents virtual tools. To directly assess this question, a virtual tool paradigm was created that provided the ability to manipulate tool components in isolation of one another. During functional magnetic resonance imaging (fMRI), adult participants performed a series of virtual tool manipulation tasks in which vision and movement kinematics of the tool were manipulated. Reaction time and hand movement direction were monitored while the tasks were performed. Functional imaging revealed that activity within all three visual streams was present, in a similar pattern to what would be expected with physical tool use. However, a previously unreported network of right-hemisphere activity was found, including the right inferior parietal lobule, middle and superior temporal gyri, and supramarginal gyrus, regions well known to be associated with tool processing within the left hemisphere. These results provide evidence that both virtual and physical tools are processed within the same brain regions, though virtual tools recruit bilateral tool processing regions to a greater extent than physical tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
A tool for modeling concurrent real-time computation
NASA Technical Reports Server (NTRS)
Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.
1990-01-01
Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment), powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.
Verification and Validation of NASA-Supported Enhancements to Decision Support Tools of PECAD
NASA Technical Reports Server (NTRS)
Ross, Kenton W.; McKellip, Rodney; Moore, Roxzana F.; Fendley, Debbie
2005-01-01
This section of the evaluation report summarizes the verification and validation (V&V) of recently implemented, NASA-supported enhancements to the decision support tools of the Production Estimates and Crop Assessment Division (PECAD). The implemented enhancements include operationally tailored Moderate Resolution Imaging Spectroradiometer (MODIS) products and products of the Global Reservoir and Lake Monitor (GRLM). The MODIS products are currently made available through two separate decision support tools: the MODIS Image Gallery and the U.S. Department of Agriculture (USDA) Foreign Agricultural Service (FAS) MODIS Normalized Difference Vegetation Index (NDVI) Database. Both the Global Reservoir and Lake Monitor and MODIS Image Gallery provide near-real-time products through PECAD's CropExplorer. This discussion addresses two areas: 1. Assessments of the standard NASA products on which these enhancements are based. 2. Characterizations of the performance of the new operational products.
Chaplain Documentation and the Electronic Medical Record: A Survey of ACPE Residency Programs.
Tartaglia, Alexander; Dodd-McCue, Diane; Ford, Timothy; Demm, Charles; Hassell, Alma
2016-01-01
This study explores the extent to which chaplaincy departments at ACPE-accredited residency programs make use of the electronic medical record (EMR) for documentation and training. Survey data solicited from 219 programs with a 45% response rate and interview findings from 11 centers demonstrate a high level of usage of the EMR as well as an expectation that CPE residents document each patient/family encounter. Centers provided considerable initial training, but less ongoing monitoring of chaplain documentation. Centers used multiple sources to develop documentation tools for the EMR. One center was verified as having created the spiritual assessment component of the documentation tool from a peer reviewed published model. Interviews found intermittent use of the student chart notes for educational purposes. One center verified a structured manner of monitoring chart notes as a performance improvement activity. Findings suggested potential for the development of a standard documentation tool for chaplain charting and training.
A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth
2005-03-15
The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images.
The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
Data visualization as a tool for improved decision making within transit agencies
DOT National Transportation Integrated Search
2007-02-01
TriMet, the regional transit provider in the Portland, OR, area has been a leader in bus transit performance monitoring using data collected via automatic vehicle location and automatic passenger counter technologies. This information is collected an...
DOT National Transportation Integrated Search
2007-01-01
The focus of the surface transportation community has been steadily shifting over the past decade, from one of capital construction and maintenance toward system operations. To support this new focus, new monitoring tools are necessary. The Virginia ...
Scoring Tools for the Analysis of Infant Respiratory Inductive Plethysmography Signals.
Robles-Rubio, Carlos Alejandro; Bertolizio, Gianluca; Brown, Karen A; Kearney, Robert E
2015-01-01
Infants recovering from anesthesia are at risk of life threatening Postoperative Apnea (POA). POA events are rare, and so the study of POA requires the analysis of long cardiorespiratory records. Manual scoring is the preferred method of analysis for these data, but it is limited by low intra- and inter-scorer repeatability. Furthermore, recommended scoring rules do not provide a comprehensive description of the respiratory patterns. This work describes a set of manual scoring tools that address these limitations. These tools include: (i) a set of definitions and scoring rules for 6 mutually exclusive, unique patterns that fully characterize infant respiratory inductive plethysmography (RIP) signals; (ii) RIPScore, a graphical, manual scoring software to apply these rules to infant data; (iii) a library of data segments representing each of the 6 patterns; (iv) a fully automated, interactive formal training protocol to standardize the analysis and establish intra- and inter-scorer repeatability; and (v) a quality control method to monitor scorers' ongoing performance over time. To evaluate these tools, three scorers from varied backgrounds were recruited and trained to reach a performance level similar to that of an expert. These scorers used RIPScore to analyze data from infants at risk of POA in two separate, independent instances. Scorers performed with high accuracy and consistency, analyzed data efficiently, had very good intra- and inter-scorer repeatability, and exhibited only minor confusion between patterns. These results indicate that our tools represent an excellent method for the analysis of respiratory patterns in long data records. Although the tools were developed for the study of POA, their use extends to any study of respiratory patterns using RIP (e.g., sleep apnea, extubation readiness).
Moreover, by establishing and monitoring scorer repeatability, our tools enable the analysis of large data sets by multiple scorers, which is essential for longitudinal and multicenter studies.
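Inter-scorer repeatability of categorical pattern labels is commonly quantified with Cohen's kappa; the following is a minimal sketch of that statistic (not necessarily the exact measure RIPScore's quality control uses):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for agreement between two scorers assigning
    categorical labels (e.g. 6 respiratory patterns) to the same
    data segments: chance-corrected observed agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: product of the two scorers' label frequencies.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Tracking this statistic per scorer pair over time is one way to implement the "ongoing performance" monitoring the abstract describes.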
A solution for exposure tool optimization at the 65-nm node and beyond
NASA Astrophysics Data System (ADS)
Itai, Daisuke
2007-03-01
As device geometries shrink, tolerances for critical dimension, focus, and overlay control decrease. For the stable manufacture of semiconductor devices at (and beyond) the 65-nm node, both performance variability and drift in exposure tools are no longer negligible factors. Guided by the concept of EES (Equipment Engineering Systems), there are growing hopes of improving the productivity of semiconductor manufacturing. We are developing a system, EESP (Equipment Engineering Support Program), based on the EES concept. The EESP system collects and stores large volumes of detailed data generated by Canon lithographic equipment while product is being manufactured. It uses those data to monitor both equipment characteristics and process characteristics, which cannot be examined without such a system. The goal of EESP is to maximize equipment capabilities by feeding the results back to APC/FDC and the equipment maintenance list. This paper describes a collaborative study of the system's effectiveness at a device maker's factories. We analyzed the performance variability of exposure tools using focus residual data and attempted to optimize tool performance using the analyzed results. The EESP system can make the optimum performance of exposure tools available to the device maker.
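Drift detection over focus-residual data of the kind described above is often done with an EWMA control chart; the following is a generic APC/FDC-style sketch under that assumption, not the actual EESP algorithm:

```python
def ewma_alarms(residuals, target=0.0, lam=0.2, sigma=1.0, k=3.0):
    """EWMA control chart over focus-residual samples: each sample is
    smoothed into a running statistic z, and indices where z leaves
    the +/- k * sigma_ewma control limits are flagged as drift.
    sigma_ewma uses the asymptotic EWMA variance sigma^2 * lam/(2-lam)."""
    z = target
    limit = k * sigma * (lam / (2.0 - lam)) ** 0.5
    alarms = []
    for i, x in enumerate(residuals):
        z = lam * x + (1.0 - lam) * z
        if abs(z - target) > limit:
            alarms.append(i)
    return alarms
```

The smoothing constant `lam` trades sensitivity to slow drift (small values) against responsiveness to abrupt shifts (large values), which is why EWMA charts suit tool-drift monitoring better than raw-sample limits.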
ESH assessment of advanced lithography materials and processes
NASA Astrophysics Data System (ADS)
Worth, Walter F.; Mallela, Ram
2004-05-01
The ESH Technology group at International SEMATECH is conducting environment, safety, and health (ESH) assessments in collaboration with the lithography technologists evaluating the performance of an increasing number of new materials and technologies being considered for advanced lithography, such as 157-nm photoresist and extreme ultraviolet (EUV). By performing data searches for 75 critical data types, emissions characterizations, and industrial hygiene (IH) monitoring during the use of the resist candidates, it has been shown that the best performing resist formulations, so far, appear to be free of potential ESH concerns. The ESH assessment of the EUV lithography tool that is being developed for SEMATECH has identified several features of the tool that are of ESH concern: high energy consumption, poor energy conversion efficiency, tool complexity, potential ergonomic and safety interlock issues, use of high-powered laser(s), generation of ionizing radiation (soft X-rays), need for adequate shielding, and characterization of the debris formed by the extreme temperature of the plasma. By bringing these ESH challenges to the attention of the technologists and tool designers, it is hoped that the processes and tools can be made more ESH friendly.
Siefarth, Caroline; Serfert, Yvonne; Drusch, Stephan; Buettner, Andrea
2013-01-01
The challenge in the development of infant formulas enriched with polyunsaturated fatty acids (PUFAs) is to meet the consumers' expectations with regard to high nutritional and sensory value. In particular, PUFAs may be prone to fatty acid oxidation that can generate rancid, metallic and/or fishy off-flavors. Although such off-flavors pose no health risk, they can nevertheless lead to rejection of products by consumers. Thus, monitoring autoxidation at its early stages is of great importance, and a suitable analytical tool for performing these evaluations is of high interest in quality monitoring. Two formulations of infant formulas were varied systematically in their mineral composition and their presence of antioxidants to produce 18 model formulas. All models were aged under controlled conditions and their oxidative deterioration was monitored. A quantitative study was performed on seven characteristic odor-active secondary oxidation products in the formulations via two-dimensional high resolution gas chromatography-mass spectrometry/olfactometry (2D-HRGC-MS/O). The sensitivity of the multi-dimensional GC-MS/O analysis was supported by two additional analytical tools for monitoring autoxidation, namely the analysis of lipid hydroperoxides and conjugated dienes. Furthermore, an aroma profile analysis (APA) was performed to reveal the presence and intensities of typical odor qualities generated in the course of fatty acid oxidation. The photometrical analyses of lipid hydroperoxides and conjugated dienes were found to be too insensitive for early indication of the development of sensory defects. By comparison, the 2D-HRGC-MS/O was capable of monitoring peroxidation of PUFAs at low ppb-level in its early stages.
Thereby, it was possible to screen oxidative variances on the basis of such volatile markers already within eight weeks after production of the products, which is an earlier indication of oxidative deterioration than achievable via conventional methods. In detail, oxidative variances between the formulations revealed that lipid oxidation was low when copper was administered in an encapsulated form and when antioxidants (vitamin E, ascorbyl palmitate) were present. PMID:28234303
Tube cutter tool and method of use for coupon removal
Nachbar, H.D.; Etten, M.P. Jr.; Kurowski, P.A.
1997-05-06
A tube cutter tool is insertable into a tube for cutting a coupon from a damaged site on the exterior of the tube. Prior to using the tool, the damaged site is first located from the interior of the tube using a multi-coil pancake eddy current test probe. The damaged site is then marked. A fiber optic probe is used to monitor the subsequent cutting procedure which is performed using a hole saw mounted on the tube cutter tool. Prior to completion of the cutting procedure, a drill in the center of the hole saw is drilled into the coupon to hold it in place. 4 figs.
Tube cutter tool and method of use for coupon removal
Nachbar, Henry D.; Etten, Jr., Marvin P.; Kurowski, Paul A.
1997-01-01
A tube cutter tool is insertable into a tube for cutting a coupon from a damaged site on the exterior of the tube. Prior to using the tool, the damaged site is first located from the interior of the tube using a multi-coil pancake eddy current test probe. The damaged site is then marked. A fiber optic probe is used to monitor the subsequent cutting procedure which is performed using a hole saw mounted on the tube cutter tool. Prior to completion of the cutting procedure, a drill in the center of the hole saw is drilled into the coupon to hold it in place.
FFI: A software tool for ecological monitoring
Duncan C. Lutes; Nathan C. Benson; MaryBeth Keifer; John F. Caratti; S. Austin Streetman
2009-01-01
A new monitoring tool called FFI (FEAT/FIREMON Integrated) has been developed to assist managers with collection, storage and analysis of ecological information. The tool was developed through the complementary integration of two fire effects monitoring systems commonly used in the United States: FIREMON and the Fire Ecology Assessment Tool. FFI provides software...
Monitoring the CMS strip tracker readout system
NASA Astrophysics Data System (ADS)
Mersi, S.; Bainbridge, R.; Baulieu, G.; Bel, S.; Cole, J.; Cripps, N.; Delaere, C.; Drouhin, F.; Fulcher, J.; Giassi, A.; Gross, L.; Hahn, K.; Mirabito, L.; Nikolic, M.; Tkaczyk, S.; Wingham, M.
2008-07-01
The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m2 and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction. The data acquisition monitoring of the Strip Tracker uses both the data acquisition and the reconstruction software frameworks in order to provide real-time feedback to shifters on the operational state of the detector, archive data for later analysis, and possibly trigger automatic recovery actions in case of errors. Here we review the proposed architecture of the monitoring system and describe its software components, which are already in place; the various monitoring streams available; and our experiences of operating and monitoring a large-scale system.
A strategic approach for Water Safety Plans implementation in Portugal.
Vieira, Jose M P
2011-03-01
Effective risk assessment and risk management approaches in public drinking water systems can benefit from a systematic process for hazards identification and effective management control based on the Water Safety Plan (WSP) concept. Good results from WSP development and implementation in a small number of Portuguese water utilities have shown that a more ambitious nationwide strategic approach to disseminate this methodology is needed. However, the establishment of strategic frameworks for systematic and organic scaling-up of WSP implementation at a national level requires major constraints to be overcome: lack of legislation and policies and the need for appropriate monitoring tools. This study presents a framework to inform future policy making by understanding the key constraints and needs related to institutional, organizational and research issues for WSP development and implementation in Portugal. This methodological contribution for WSP implementation can be replicated at a global scale. National health authorities and the Regulator may promote changes in legislation and policies. Independent global monitoring and benchmarking are adequate tools for measuring the progress over time and for comparing the performance of water utilities. Water utilities' self-assessment must include performance improvement, operational monitoring and verification. Research and education and resources dissemination ensure knowledge acquisition and transfer.
INTEGRATED SYSTEMS HEALTH MANAGEMENT AS AN ENABLER FOR CONDITION BASED MAINTENANCE AND AUTONOMIC…
2015-09-17
Fragmentary record: wind turbines, SHM tools, maintenance scheduling, and performance of the SHM system determine the added value of the system of systems; cites Van Horenbeek, A., & Pintelon, L. (2013), "Quantifying the added value of an imperfectly performing condition monitoring system—Application to a wind turbine".
Tranzit XPress : hazardous material fleet management and monitoring system : evaluation report
DOT National Transportation Integrated Search
1997-07-01
This report presents the evaluation performed on the first phase of the Tranzit XPress system. The system comprises a traffic/safety control center, motor vehicle instrumentation, and a variety of off-vehicle tools that communicate with eac...
Applying Molecular Tools for Monitoring Inhibition of Nitrification by Heavy Metals
The biological removal of ammonia in conventional wastewater treatment plants (WWTPs) is performed by promoting nitrification and denitrification as sequential steps. The first step in nitrification, the oxidation of ammonia to nitrite by ammonia oxidizing bacteria (AOB), is sens...
MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.
Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui
2015-12-12
Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To establish peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recently recommended by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include: limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads respectively.
Multi-category micro-milling tool wear monitoring with continuous hidden Markov models
NASA Astrophysics Data System (ADS)
Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon
2009-02-01
In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and the high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous hidden Markov models (HMMs) are adapted for modeling the tool wear process in micro-milling and for estimating the tool wear state from cutting force features. For noise robustness, the HMM outputs are passed through a median filter to suppress spurious tool-state jumps caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.
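The median-filter smoothing of the decoded state sequence can be sketched as follows (a minimal illustration; the wear-state labels and window size are assumptions, not the paper's actual configuration):

```python
from statistics import median

def median_filter_states(states, window=3):
    """Smooth a decoded tool-wear state sequence with a sliding median
    to suppress isolated, noise-induced state jumps (edges kept as-is)."""
    half = window // 2
    smoothed = list(states)
    for i in range(half, len(states) - half):
        # Median over the window, always read from the original sequence.
        smoothed[i] = median(states[i - half:i + half + 1])
    return smoothed

# A decoded sequence with single-frame outliers (hypothetical states:
# 0 = initial, 1 = progressive, 2 = accelerated wear).
decoded = [0, 0, 1, 0, 0, 0, 2, 1, 2, 2, 2, 2]
print(median_filter_states(decoded))
```

Because each output value is the median of its neighbourhood, a wear state must persist for more than one frame before it is accepted, which is the noise-robustness behaviour described in the abstract.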
Performance Assessment as a Diagnostic Tool for Science Teachers
NASA Astrophysics Data System (ADS)
Kruit, Patricia; Oostdam, Ron; van den Berg, Ed; Schuitema, Jaap
2018-04-01
Information on students' development of science skills is essential for teachers to evaluate and improve their own education, as well as to provide adequate support and feedback to the learning process of individual students. The present study explores and discusses the use of performance assessments as a diagnostic tool for formative assessment to inform teachers and guide instruction of science skills in primary education. Three performance assessments were administered to more than 400 students in grades 5 and 6 of primary education. Students performed small experiments using real materials while following the different steps of the empirical cycle. The mutual relationship between the three performance assessments is examined to provide evidence for the value of performance assessments as useful tools for formative evaluation. Differences in response patterns are discussed, and the diagnostic value of performance assessments is illustrated with examples of individual student performances. Findings show that the performance assessments were difficult for grades 5 and 6 students but that much individual variation exists regarding the different steps of the empirical cycle. Evaluation of scores as well as a more substantive analysis of students' responses provided insight into typical errors that students make. It is concluded that performance assessments can be used as a diagnostic tool for monitoring students' skill performance as well as to support teachers in evaluating and improving their science lessons.
Using a portable sulfide monitor as a motivational tool: a clinical study.
Uppal, Ranjit Singh; Malhotra, Ranjan; Grover, Vishakha; Grover, Deepak
2012-01-01
Bad breath has a significant impact on the daily life of those who suffer from it. Oral malodor may rank only behind dental caries and periodontal disease as a cause of patient visits to the dentist. The aim of this study was to use a portable sulfide monitor as a motivational tool for encouraging patients towards better oral hygiene, by correlating plaque scores with sulfide monitor scores and by comparing sulfide monitor scores before and after complete prophylaxis and 3 months after patient motivation. 30 patients with chronic periodontitis, with a chief complaint of oral malodor, participated in this study. At the first visit, plaque scores (P1) and sulfide monitor scores before (BCR1) and after complete oral prophylaxis (BCR2) were recorded. The patients were then motivated towards better oral hygiene. After 3 months, plaque scores (P2) and sulfide monitor scores (BCR3) were recorded again. Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences); a paired-sample test was applied. A statistically significant reduction in sulfide monitor scores was reported after complete oral prophylaxis and 3 months after patient motivation. Plaque scores were significantly reduced after the 3-month period. Plaque scores and breath-checker scores were positively correlated, and the intensity of oral malodor was positively correlated with plaque scores. The portable sulfide monitor was efficacious in motivating patients towards better oral hygiene.
ATLAS Distributed Computing Monitoring tools during the LHC Run I
NASA Astrophysics Data System (ADS)
Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration
2014-06-01
This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and ATLAS Management, for long-term trends and accounting information about ATLAS Distributed Computing resources. During LHC Run I, a significant development effort was invested in standardizing the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer from the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed with the visual identity of the provided graphical elements in mind, and with re-usability of the visualization components across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with this separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions that correlate multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
Guerlain, Stephanie; Adams, Reid B; Turrentine, F Beth; Shin, Thomas; Guo, Hui; Collins, Stephen R; Calland, J Forrest
2005-01-01
The objective of this research was to develop a digital system to archive the complete operative environment along with the assessment tools for analysis of this data, allowing prospective studies of operative performance, intraoperative errors, team performance, and communication. Ability to study this environment will yield new insights, allowing design of systems to avoid preventable errors that contribute to perioperative complications. A multitrack, synchronized, digital audio-visual recording system (RATE tool) was developed to monitor intraoperative performance, including software to synchronize data and allow assignment of independent observational scores. Cases were scored for technical performance, participants' situational awareness (knowledge of critical information), and their comfort and satisfaction with the conduct of the procedure. Laparoscopic cholecystectomy (n = 10) was studied. Technical performance of the RATE tool was excellent. The RATE tool allowed real time, multitrack data collection of all aspects of the operative environment, while permitting digital recording of the objective assessment data in a time synchronized and annotated fashion during the procedure. The mean technical performance score was 73% +/- 28% of maximum (perfect) performance. Situational awareness varied widely among team members, with the attending surgeon typically the only team member having comprehensive knowledge of critical case information. The RATE tool allows prospective analysis of performance measures such as technical judgments, team performance, and communication patterns, offers the opportunity to conduct prospective intraoperative studies of human performance, and allows for postoperative discussion, review, and teaching. This study also suggests that gaps in situational awareness might be an underappreciated source of operative adverse events. Future uses of this system will aid teaching, failure or adverse event analysis, and intervention research.
High throughput wafer defect monitor for integrated metrology applications in photolithography
NASA Astrophysics Data System (ADS)
Rao, Nagaraja; Kinney, Patrick; Gupta, Anand
2008-03-01
The traditional approach to semiconductor wafer inspection is based on the use of stand-alone metrology tools, which while highly sensitive, are large, expensive and slow, requiring inspection to be performed off-line and on a lot sampling basis. Due to the long cycle times and sparse sampling, the current wafer inspection approach is not suited to rapid detection of process excursions that affect yield. The semiconductor industry is gradually moving towards deploying integrated metrology tools for real-time "monitoring" of product wafers during the manufacturing process. Integrated metrology aims to provide end-users with rapid feedback of problems during the manufacturing process, and the benefit of increased yield, and reduced rework and scrap. The approach of monitoring 100% of the wafers being processed requires some trade-off in sensitivity compared to traditional standalone metrology tools, but not by much. This paper describes a compact, low-cost wafer defect monitor suitable for integrated metrology applications and capable of detecting submicron defects on semiconductor wafers at an inspection rate of about 10 seconds per wafer (or 360 wafers per hour). The wafer monitor uses a whole wafer imaging approach to detect defects on both un-patterned and patterned wafers. Laboratory tests with a prototype system have demonstrated sensitivity down to 0.3 µm on un-patterned wafers and down to 1 µm on patterned wafers, at inspection rates of 10 seconds per wafer. An ideal application for this technology is preventing photolithography defects such as "hot spots" by implementing a wafer backside monitoring step prior to exposing wafers in the lithography step.
Cerebral Oximetry as an Auxiliary Diagnostic Tool in the Diagnosis of Brain Death.
Tatli, O; Bekar, O; Imamoglu, M; Gonenc Cekic, O; Aygun, A; Eryigit, U; Karaca, Y; Sahin, A; Turkmen, S; Turedi, S
2017-10-01
To investigate the efficacy of cerebral oximetry (CO) as an auxiliary diagnostic tool in brain death (BD). This observational case-control study was performed on patients with suspected BD. Patients with a diagnosis of BD confirmed by the brain death committee were enrolled as the BD group, and the other patients as the non-BD group. CO monitoring was performed for at least 6 h, and cerebral tissue oxygen saturation (ScO2) parameters were compared. The mean ScO2 level in the BD group was lower than in the non-BD group: mean difference for the right lobe = 6.48 (95% confidence interval [CI] 0.08 to 12.88) and for the left lobe = 6.09 (95% CI -0.22 to 12.41). Maximum ScO2 values in the BD group were significantly lower than in the non-BD group: mean difference for the right lobe = 8.20 (95% CI 1.64 to 14.77) and for the left lobe = 9.54 (95% CI 3.06 to 16.03). The area under the curve for right lobe maximum ScO2 was 0.69 (95% CI 0.55 to 0.81) and for the left lobe was 0.72 (95% CI 0.58 to 0.84). Maximum ScO2 in brain-dead patients at CO monitoring is significantly low. However, this cannot be used to differentiate brain-dead and non-brain-dead patients. CO monitoring is therefore not an appropriate auxiliary diagnostic tool for confirming BD. Copyright © 2017 Elsevier Inc. All rights reserved.
A Business Analytics Software Tool for Monitoring and Predicting Radiology Throughput Performance.
Jones, Stephen; Cournane, Seán; Sheehy, Niall; Hederman, Lucy
2016-12-01
Business analytics (BA) is increasingly being utilised by radiology departments to analyse and present data. It encompasses statistical analysis, forecasting and predictive modelling and is used as an umbrella term for decision support and business intelligence systems. The primary aim of this study was to determine whether utilising BA technologies could contribute towards improved decision support and resource management within radiology departments. A set of information technology requirements were identified with key stakeholders, and a prototype BA software tool was designed, developed and implemented. A qualitative evaluation of the tool was carried out through a series of semi-structured interviews with key stakeholders. Feedback was collated, and emergent themes were identified. The results indicated that BA software applications can provide visibility of radiology performance data across all time horizons. The study demonstrated that the tool could potentially assist with improving operational efficiencies and management of radiology resources.
Use of electronic medical record data for quality improvement in schizophrenia treatment.
Owen, Richard R; Thrush, Carol R; Cannon, Dale; Sloan, Kevin L; Curran, Geoff; Hudson, Teresa; Austen, Mark; Ritchie, Mona
2004-01-01
An understanding of the strengths and limitations of automated data is valuable when using administrative or clinical databases to monitor and improve the quality of health care. This study discusses the feasibility and validity of using data electronically extracted from the Veterans Health Administration (VHA) computer database (VistA) to monitor guideline performance for inpatient and outpatient treatment of schizophrenia. The authors also discuss preliminary results and their experience in applying these methods to monitor antipsychotic prescribing using the South Central VA Healthcare Network (SCVAHCN) Data Warehouse as a tool for quality improvement.
Individually Coded Telemetry: a Tool for Studying Heart Rate and Behaviour in Reindeer Calves
Eloranta, E; Norberg, H; Nilsson, A; Pudas, T; Säkkinen, H
2002-01-01
The aim of the study was to test the performance of a silver wire modified version of the coded telemetric heart rate monitor Polar Vantage NV™ (PVNV) and to measure heart rate (HR) in a group of captive reindeer calves during different behaviour. The technical performance of PVNV HR monitors was tested in cold conditions (-30°C) using a pulse generator and the correlation between generated pulse and PVNV values was high (r = 0.9957). The accuracy was tested by comparing the HR obtained with the PVNV monitor with the standard ECG, and the correlation was significant (r = 0.9965). Both circadian HR and HR related to behavioural pattern were recorded. A circadian rhythm was observed in the HR in reindeer with a minimum during night and early morning hours and maximum at noon and during the afternoon, the average HR of the reindeer calves studied being 42.5 beats/min in February. The behaviour was recorded by focal individual observations and the data was synchronized with the output of the HR monitors. Running differed from all other behavioural categories in HR. Inter-individual differences were seen expressing individual responses to external and internal stimuli. The silver wire modified Polar Vantage NV™ provides a suitable and reliable tool for measuring heart rate in reindeer, also in natural conditions. PMID:12564543
Automated Diabetic Retinopathy Screening and Monitoring Using Retinal Fundus Image Analysis.
Bhaskaranand, Malavika; Ramachandra, Chaithanya; Bhat, Sandeep; Cuadros, Jorge; Nittala, Muneeswar Gupta; Sadda, SriniVas; Solanki, Kaushal
2016-02-16
Diabetic retinopathy (DR)-a common complication of diabetes-is the leading cause of vision loss among the working-age population in the western world. DR is largely asymptomatic, but if detected at early stages the progression to vision loss can be significantly slowed. With the increasing diabetic population there is an urgent need for automated DR screening and monitoring. To address this growing need, in this article we discuss an automated DR screening tool and extend it for automated estimation of microaneurysm (MA) turnover, a potential biomarker for DR risk. The DR screening tool automatically analyzes color retinal fundus images from a patient encounter for the various DR pathologies and collates the information from all the images belonging to a patient encounter to generate a patient-level screening recommendation. The MA turnover estimation tool aligns retinal images from multiple encounters of a patient, localizes MAs, and performs MA dynamics analysis to evaluate new, persistent, and disappeared lesion maps and estimate MA turnover rates. The DR screening tool achieves 90% sensitivity at 63.2% specificity on a data set of 40 542 images from 5084 patient encounters obtained from the EyePACS telescreening system. On a subset of 7 longitudinal pairs the MA turnover estimation tool identifies new and disappeared MAs with 100% sensitivity and average false positives of 0.43 and 1.6 respectively. The presented automated tools have the potential to address the growing need for DR screening and monitoring, thereby saving vision of millions of diabetic patients worldwide. © 2016 Diabetes Technology Society.
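The lesion-map bookkeeping behind MA turnover can be sketched as follows (a minimal illustration: lesion IDs stand in for registered (x, y) locations after image alignment, and the per-month rate definition is an assumption, not necessarily the metric used by the tool described above):

```python
def ma_turnover(baseline, followup, months):
    """Classify microaneurysms (MAs) across two registered visits and
    estimate a turnover rate. `baseline` and `followup` are sets of
    matched lesion identifiers; a real system would match spatial
    locations after aligning the retinal images."""
    new = followup - baseline           # appeared since baseline
    disappeared = baseline - followup   # resolved since baseline
    persistent = baseline & followup    # present at both visits
    return {
        "new": sorted(new),
        "disappeared": sorted(disappeared),
        "persistent": sorted(persistent),
        # One plausible definition: appearing + resolving lesions per month.
        "turnover_per_month": (len(new) + len(disappeared)) / months,
    }

result = ma_turnover({1, 2, 3, 4}, {2, 3, 5}, months=6)
print(result)
```

Set differences make the new/persistent/disappeared classification explicit once the hard part, spatial registration of the two images, has been done upstream.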
17 CFR 49.17 - Access to SDR data.
Code of Federal Regulations, 2013 CFR
2013-04-01
... legal and statutory responsibilities under the Act and related regulations. (2) Monitoring tools. A registered swap data repository is required to provide the Commission with proper tools for the monitoring... data structure and content. These monitoring tools shall be substantially similar in analytical...
17 CFR 49.17 - Access to SDR data.
Code of Federal Regulations, 2014 CFR
2014-04-01
... legal and statutory responsibilities under the Act and related regulations. (2) Monitoring tools. A registered swap data repository is required to provide the Commission with proper tools for the monitoring... data structure and content. These monitoring tools shall be substantially similar in analytical...
On-line Monitoring for Cutting Tool Wear Condition Based on the Parameters
NASA Astrophysics Data System (ADS)
Han, Fenghua; Xie, Feng
2017-07-01
In machining, it is very important to monitor the working state of the cutting tool. Based on acceleration signals acquired at constant spindle speed, time-domain and frequency-domain analysis of relevant indicators enables online monitoring of the tool wear condition. The analysis results show that the method can effectively judge the tool wear condition during machining, and it has practical application value.
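Typical time-domain indicators of this kind can be sketched as follows (a generic illustration; the indicator set, and any wear thresholds applied to it, are chosen empirically per tool and workpiece material):

```python
import math

def time_domain_indicators(signal):
    """Compute two common time-domain wear indicators from one frame of
    vibration (acceleration) samples: RMS energy, which tends to grow
    with wear, and crest factor (peak / RMS), which reacts to impacts."""
    n = len(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    crest = max(abs(x) for x in signal) / rms
    return rms, crest

rms, crest = time_domain_indicators([3.0, -4.0, 0.0, 0.0])
print(rms, crest)  # 2.5 1.6
```

Frequency-domain indicators (e.g. energy in tooth-passing-frequency bands of an FFT) would be computed per frame in the same way and tracked against empirical thresholds.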
2013-01-01
Background We describe the setup of a neonatal quality improvement tool and list which peer-reviewed requirements it fulfils and which it does not. We report on the effects observed so far, how the units can identify quality improvement potential, and how they can measure the effect of changes made to improve quality. Methods Application of a prospective longitudinal national cohort data collection that uses algorithms to ensure high data quality (i.e. checks for completeness, plausibility and reliability) and to perform data imaging (Plsek's p-charts and standardized mortality or morbidity ratio (SMR) charts). The collected data allow monitoring of a study collective of very low birth-weight (VLBW) infants born from 2009 to 2011 by applying a quality cycle following the steps 'guideline - perform - falsify - reform'. Results 2025 VLBW live births from 2009 to 2011, representing 96.1% of all VLBW live births in Switzerland, display a similar mortality rate but better morbidity rates when compared to other networks. Data quality in general is high but subject to improvement in some units. Seven measurements display quality improvement potential in individual units. The methods used fulfil several international recommendations. Conclusions The Quality Cycle of the Swiss Neonatal Network is a helpful instrument to monitor and gradually help improve the quality of care in a region with high quality standards and low statistical discrimination capacity. PMID:24074151
USING DIRECT-PUSH TOOLS TO MAP HYDROSTRATIGRAPHY AND PREDICT MTBE PLUME DIVING
MTBE plumes have been documented to dive beneath screened intervals of conventional monitoring well networks at a number of LUST sites. This behavior makes these plumes difficult both to detect and remediate. Electrical conductivity logging and pneumatic slug testing performed in...
The Electronic Supervisor: New Technology, New Tensions.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Office of Technology Assessment.
Computer technology has made it possible for employers to collect and analyze management information about employees' work performance and equipment use. There are three main tools for supervising office activities. Computer-based (electronic) monitoring systems automatically record statistics about the work of employees using computer or…
Using robotics construction kits as metacognitive tools: a research in an Italian primary school.
La Paglia, Filippo; Caci, Barbara; La Barbera, Daniele; Cardaci, Maurizio
2010-01-01
The present paper is aimed at analyzing the process of building and programming robots as a metacognitive tool. Quantitative data and qualitative observations from a research performed in a sample of children attending an Italian primary school are described in this work. Results showed that robotics activities may be intended as a new metacognitive environment that allows children to monitor themselves and control their learning actions in an autonomous and self-centered way.
Clinical Assessment of Risk Management: an INtegrated Approach (CARMINA).
Tricarico, Pierfrancesco; Tardivo, Stefano; Sotgiu, Giovanni; Moretti, Francesca; Poletti, Piera; Fiore, Alberto; Monturano, Massimo; Mura, Ida; Privitera, Gaetano; Brusaferro, Silvio
2016-08-08
Purpose - The European Union recommendations for patient safety call for shared clinical risk management (CRM) safety standards able to guide organizations in CRM implementation. The purpose of this paper is to develop a self-evaluation tool to measure healthcare organization performance on CRM and guide improvements over time. Design/methodology/approach - A multi-step approach was implemented, including: a systematic literature review; consensus meetings with an expert panel from eight leading Italian organizations to reach agreement on the first version; field testing to assess instrument feasibility and flexibility; and a Delphi strategy with a second expert panel for content validation and development of a balanced scoring system. Findings - The self-assessment tool - Clinical Assessment of Risk Management: an INtegrated Approach - includes seven areas (governance, communication, knowledge and skills, safe environment, care processes, adverse event management, learning from experience) and 52 standards. Each standard is evaluated according to four performance levels: minimum; monitoring; outcomes; and improvement actions. The result is a feasible, flexible and valid instrument usable throughout different organizations. Practical implications - This tool allows practitioners to assess their CRM activities against minimum levels, monitor performance, benchmark with other institutions and disseminate results to different stakeholders. Originality/value - The multi-step approach allowed us to identify core minimum CRM levels in a field where no consensus had been reached. Most standards may be easily adopted in other countries.
The challenges and promises of genetic approaches for ballast water management
NASA Astrophysics Data System (ADS)
Rey, Anaïs; Basurko, Oihane C.; Rodríguez-Ezpeleta, Naiara
2018-03-01
Ballast water is a main vector for the introduction of Harmful Aquatic Organisms and Pathogens, which include Non-Indigenous Species. Numerous and diverse organisms are transferred daily from a donor to a recipient port. Developed to prevent these introduction events, the International Convention for the Control and Management of Ships' Ballast Water and Sediments will enter into force in 2017. This international convention requires the monitoring of Harmful Aquatic Organisms and Pathogens. In this review, we highlight the urgent need to develop cost-effective methods to: (1) perform the biological analyses required by the convention; and (2) assess the effectiveness of the two main ballast water management strategies, i.e. ballast water exchange and the use of ballast water treatment systems. We have compiled the biological analyses required by the convention and performed a comprehensive evaluation of the potential and challenges of using genetic tools in this context. Following an overview of studies applying genetic tools to ballast-water-related research, we present metabarcoding as a relevant approach for early detection of Harmful Aquatic Organisms and Pathogens in general, and for ballast water monitoring and port risk assessment in particular. Nonetheless, before genetic tools are implemented in the context of the ballast water management convention, benchmark tests against traditional methods should be performed, and standard, reproducible and easy-to-apply protocols should be developed.
NASA Astrophysics Data System (ADS)
Liu, Ronghua; Sun, Qiaofeng; Hu, Tian; Li, Lian; Nie, Lei; Wang, Jiayue; Zhou, Wanhui; Zang, Hengchang
2018-03-01
As a powerful process analytical technology (PAT) tool, near-infrared (NIR) spectroscopy has been widely used for real-time monitoring. In this study, NIR spectroscopy was applied to monitor multiple parameters of the traditional Chinese medicine (TCM) Shenzhiling oral liquid during its concentration process, in order to guarantee product quality. Five lab-scale batches were used to construct quantitative models for five chemical ingredients and one physical property (sample density) during the concentration process. Paeoniflorin, albiflorin, liquiritin and sample density were modeled by partial least squares regression (PLSR), while the contents of glycyrrhizic acid and cinnamic acid were modeled by support vector machine regression (SVMR). Standard normal variate (SNV) and/or Savitzky-Golay (SG) smoothing with derivative methods were adopted for spectral pretreatment. Variable selection methods including correlation coefficient (CC), competitive adaptive reweighted sampling (CARS) and interval partial least squares regression (iPLS) were applied to optimize the models. The results indicated that NIR spectroscopy is an effective tool for monitoring the concentration process of Shenzhiling oral liquid.
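The SNV pretreatment step can be sketched in a few lines (a generic illustration with synthetic data, not the study's spectra; a PLSR or SVMR model, e.g. scikit-learn's PLSRegression, would then be fit on the pretreated matrix):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) to
    zero mean and unit standard deviation, removing additive offsets and
    multiplicative scatter effects before regression modelling."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two synthetic 5-point "spectra" with different offsets and scales
# collapse onto the same shape after SNV.
X = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [10.0, 20.0, 30.0, 40.0, 50.0]])
Z = snv(X)
print(np.round(Z, 3))
```

Because SNV normalises each spectrum independently, it needs no reference spectrum, which is one reason it is a common default pretreatment ahead of PLSR.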
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
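The modular selection of processing sequences described above can be illustrated with a small composition helper (the stage names below are hypothetical stand-ins, not the actual IPATS modules):

```python
from functools import reduce

def pipeline(*stages):
    """Compose processing stages left-to-right so a metric-specific
    sequence can be selected at run time; shared stages are defined
    once and reused across metrics, avoiding duplicated code."""
    return lambda data: reduce(lambda d, stage: stage(d), stages, data)

# Hypothetical stages operating on a toy "image" (a list of numbers).
def radiometric_calibration(img):  return [x * 2 for x in img]
def band_to_band_alignment(img):   return [x + 1 for x in img]
def landmark_correlation(img):     return sum(img)

navigation_metric = pipeline(radiometric_calibration, landmark_correlation)
registration_metric = pipeline(radiometric_calibration,
                               band_to_band_alignment, landmark_correlation)
print(navigation_metric([1, 2, 3]), registration_metric([1, 2, 3]))
```

Each metric reuses the shared calibration stage and adds only what it needs, which is the code-efficiency property the modular design is aiming for.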
Development of a Pre-Prototype Power Assisted Glove End Effector for Extravehicular Activity
NASA Technical Reports Server (NTRS)
1986-01-01
The purpose of this program was to develop an EVA power tool which is capable of performing a variety of functions while at the same time increasing the EVA crewmember's effectiveness by reducing hand fatigue associated with gripping tools through a pressurized EMU glove. The Power Assisted Glove End Effector (PAGE) preprototype hardware met or exceeded all of its technical requirements and has incorporated acoustic feedback to allow the EVA crewmember to monitor motor loading and speed. If this tool is to be developed for flight use, several issues need to be addressed. These issues are listed.
van den Berg, Michael J; Kringos, Dionne S; Marks, Lisanne K; Klazinga, Niek S
2014-01-09
In 2006, the first edition of a monitoring tool for the performance of the Dutch health care system was released: the Dutch Health Care Performance Report (DHCPR). The Netherlands was among the first countries in the world to develop such a comprehensive tool for reporting performance on quality, access, and affordability of health care. The tool contains 125 performance indicators; the choice of specific indicators resulted from a dialogue between researchers and policy makers. In the 'policy cycle', the DHCPR can rationally be placed between evaluation (accountability) and agenda-setting (for strategic decision making). In this paper, we reflect on important lessons learned after seven years of health care system performance assessment. These lessons concern the importance of a good conceptual framework for health system performance assessment, the importance of repeated measurement, the strength of combining multiple perspectives (e.g., patient, professional, objective, subjective) on the same issue, the importance of a central role for the patient's perspective in performance assessment, how to deal with the absence of data in relevant domains, the value of international benchmarking, and the continuous exchange between researchers and policy makers.
Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris
2014-01-01
Introduction: Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. Methods: A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. Results: SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. Discussion: The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.
NASA Astrophysics Data System (ADS)
Varela Rodriguez, F.
2011-12-01
The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise such a large system, identify errors, and troubleshoot them. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, only recently has the software package been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven very efficient in optimizing the running systems and detecting misbehaving processes or nodes.
Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum
NASA Astrophysics Data System (ADS)
Guan, Shan; Song, Weijie; Pang, Hongyang
2017-09-01
In the metal cutting process, the signal contains a wealth of tool wear state information. An analysis and feature extraction method for tool wear signals based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions containing the main information were screened out by the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was performed on the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes were extracted from the Hilbert marginal spectrum to construct the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
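The pipeline above (EMD, IMF screening, Hilbert transform, marginal spectrum, amplitude-domain features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the EMD step is omitted (precomputed IMFs are assumed as input), the analytic signal is built with an FFT, and the feature set is a generic choice.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to the Hilbert transform step)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def marginal_spectrum(imfs, fs, n_bins=64):
    """Sum instantaneous amplitude over time, binned by instantaneous frequency."""
    edges = np.linspace(0.0, fs / 2, n_bins + 1)
    spectrum = np.zeros(n_bins)
    for imf in imfs:
        z = analytic_signal(imf)
        amp = np.abs(z)
        phase = np.unwrap(np.angle(z))
        freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency
        idx = np.clip(np.digitize(freq, edges) - 1, 0, n_bins - 1)
        np.add.at(spectrum, idx, amp[1:])
    return edges[:-1], spectrum

def amplitude_features(spectrum):
    """Generic amplitude-domain indexes forming a wear-state feature vector."""
    rms = np.sqrt(np.mean(spectrum ** 2))
    return {"mean": spectrum.mean(), "rms": rms,
            "peak": spectrum.max(), "crest": spectrum.max() / rms}

# toy "IMFs": two tones standing in for EMD output of a cutting-force signal
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
imfs = [np.sin(2 * np.pi * 50 * t), 0.5 * np.sin(2 * np.pi * 120 * t)]
freqs, spec = marginal_spectrum(imfs, fs)
print(amplitude_features(spec))
```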
Niaksu, Olegas; Zaptorius, Jonas
2014-01-01
This paper presents a methodology suitable for creating a performance-related remuneration system in the healthcare sector that meets requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking and a posteriori evaluation is proposed and discussed. The Priority Distribution Method is applied for unbiased weighting of performance criteria, and data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed a method for healthcare-specific criteria selection consisting of 8 steps, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluating the outcomes of the motivational system was proposed. The described methodology for calculating performance-related payment still needs practical approbation. We plan to develop semi-automated tools for monitoring institutional and personal performance indicators. The final step would be approbation of the methodology in a healthcare facility.
Using the Leitz LMS 2000 for monitoring and improvement of an e-beam
NASA Astrophysics Data System (ADS)
Blaesing-Bangert, Carola; Roeth, Klaus-Dieter; Ogawa, Yoichi
1994-11-01
Kaizen--continuous improvement--is a philosophy practiced in Japan which is also becoming more and more important in Western companies. To implement this philosophy in the semiconductor industry, a high-performance metrology tool is essential to determine the status of production quality periodically. An important prerequisite for statistical process control is the high stability of the metrology tool over several months or years; the tool-induced shift should be as small as possible. The pattern placement metrology tool Leitz LMS 2000 has been used in a major European mask house for several years now to qualify masks within the tightest specifications and to monitor the MEBES III and its cassettes. The mask shop's internal specification for the long-term repeatability of the pattern placement metrology tool is 19 nm instead of the 42 nm specified by the supplier of the tool. The process capability of the LMS 2000 over 18 months is represented by an average cpk value of 2.8 for orthogonality, 5.2 for x-scaling, and 3.0 for y-scaling. The process capability of the MEBES III and its cassettes was improved in the past years. For instance, 100% of the masks produced with a process tolerance of +/- 200 nm are now within this limit.
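The cpk values quoted above follow the standard process-capability definition, cpk = min(USL - mu, mu - LSL) / (3 * sigma). A small sketch; the +/-200 nm tolerance comes from the abstract, but the placement-error measurements below are invented for illustration:

```python
import numpy as np

def cpk(samples, lsl, usl):
    """Process capability index: distance of the mean to the nearest
    specification limit, in units of 3 standard deviations."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# illustrative placement-error data (nm); limits are the +/-200 nm
# process tolerance mentioned above, the measurements are made up
rng = np.random.default_rng(0)
errors = rng.normal(5, 20, size=200)
print(round(cpk(errors, -200, 200), 2))
```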
Monitoring machining conditions by infrared images
NASA Astrophysics Data System (ADS)
Borelli, Joao E.; Gonzaga Trabasso, Luis; Gonzaga, Adilson; Coelho, Reginaldo T.
2001-03-01
During the machining process, knowledge of the temperature is the most important factor in tool analysis. It allows control of the main factors that influence tool use, lifetime and waste. The temperature in the contact area between the piece and the tool results from material removal during the cutting operation, and it is difficult to obtain because the tool and the work piece are in motion. One way to measure the temperature in this situation is to detect the infrared radiation. This work presents a new methodology for diagnosis and monitoring of machining processes using infrared images. The infrared image provides a map in gray tones of the elements in the process: tool, work piece and chips. Each gray tone in the image corresponds to a certain temperature for each of those materials, and the relationship between gray tones and temperature is obtained from prior calibration of the infrared camera. The system developed in this work uses an infrared camera, a frame grabber board and software composed of three modules. The first module performs image acquisition and processing. The second module extracts image features and forms the feature vector. Finally, the third module uses fuzzy logic to evaluate the feature vector and supplies the tool state diagnostic as output.
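The gray-tone-to-temperature mapping and the fuzzy diagnostic stage can be illustrated with a toy sketch. The calibration pairs, membership functions and state labels below are invented for demonstration; a real system would use the camera's measured calibration curve and tuned memberships.

```python
import numpy as np

# hypothetical calibration pairs from the infrared camera: gray tone -> deg C
cal_gray = np.array([0, 64, 128, 192, 255], dtype=float)
cal_temp = np.array([20, 150, 320, 520, 750], dtype=float)

def to_temperature(gray_image):
    """Map gray tones to temperature by interpolating the calibration curve."""
    return np.interp(gray_image, cal_gray, cal_temp)

def tool_state(max_temp):
    """Toy fuzzy evaluation: triangular memberships over the peak contact
    temperature, choosing the state with the largest membership."""
    def tri(x, a, b, c):
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))
    memberships = {
        "normal": tri(max_temp, -1, 100, 400),
        "worn": tri(max_temp, 300, 500, 700),
        "critical": tri(max_temp, 600, 800, 1001),
    }
    return max(memberships, key=memberships.get)

image = np.array([[30, 80], [200, 140]], dtype=float)   # fake IR frame
temps = to_temperature(image)
print(temps.max(), tool_state(temps.max()))
```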
Use of a monitoring tool for growth and development in Brazilian children – systematic review
de Almeida, Ana Claudia; Mendes, Larissa da Costa; Sad, Izabela Rocha; Ramos, Eloane Gonçalves; Fonseca, Vânia Matos; Peixoto, Maria Virginia Marques
2016-01-01
Objective: To assess the use of a health monitoring tool in Brazilian children, with emphasis on the variables related to growth and development, which are crucial aspects of child health care. Data source: A systematic review of the literature was carried out in studies performed in Brazil, using the Cochrane Brazil, Lilacs, SciELO and Medline databases. The descriptors and keywords used were “growth and development”, “child development”, “child health record”, “child health handbook”, “health record and child” and “child handbook”, as well as the equivalent terms in Portuguese. Studies were screened by title and summary and those considered eligible were read in full. Data synthesis: Sixty-eight articles were identified and eight articles were included in the review, as they carried out a quantitative analysis of the filling out of information. Five studies assessed the completion of the Child's Health Record and three of the Child's Health Handbook. All articles concluded that the information was not properly recorded. Growth monitoring charts were rarely filled out, reaching 96.3% in the case of weight for age. The use of the BMI chart was not reported, despite the growing rates of childhood obesity. Only two studies reported the completion of development milestones and, in these, the milestones were recorded in approximately 20% of the verified tools. Conclusions: The results of the assessed articles disclosed underutilization of the tool and reflect low awareness by health professionals regarding the recording of information in the child's health monitoring document. PMID:26705605
de Almeida, Ana Claudia; Mendes, Larissa da Costa; Sad, Izabela Rocha; Ramos, Eloane Gonçalves; Fonseca, Vânia Matos; Peixoto, Maria Virginia Marques
2016-01-01
To assess the use of a health monitoring tool in Brazilian children, with emphasis on the variables related to growth and development, which are crucial aspects of child health care. A systematic review of the literature was carried out in studies performed in Brazil, using the Cochrane Brazil, Lilacs, SciELO and Medline databases. The descriptors and keywords used were "growth and development", "child development", "child health record", "child health handbook", "health record and child" and "child handbook", as well as the equivalent terms in Portuguese. Studies were screened by title and summary and those considered eligible were read in full. Sixty-eight articles were identified and eight articles were included in the review, as they carried out a quantitative analysis of the filling out of information. Five studies assessed the completion of the Child's Health Record and three of the Child's Health Handbook. All articles concluded that the information was not properly recorded. Growth monitoring charts were rarely filled out, reaching 96.3% in the case of weight for age. The use of the BMI chart was not reported, despite the growing rates of childhood obesity. Only two studies reported the completion of development milestones and, in these, the milestones were recorded in approximately 20% of the verified tools. The results of the assessed articles disclosed underutilization of the tool and reflect low awareness by health professionals regarding the recording of information in the child's health monitoring document. Copyright © 2015 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.
Visiting Vehicle Ground Trajectory Tool
NASA Technical Reports Server (NTRS)
Hamm, Dustin
2013-01-01
The International Space Station (ISS) Visiting Vehicle Group needed a targeting tool for vehicles that rendezvous with the ISS. The Visiting Vehicle Ground Trajectory targeting tool provides the ability to perform both real-time and planning operations for the Visiting Vehicle Group. This tool provides a highly reconfigurable base on which the Visiting Vehicle Group performs its work. The application is composed of a telemetry processing function, a relative motion function, a targeting function, a vector view, and 2D/3D world-map-type graphics. The software tool provides the ability to plan a rendezvous trajectory for vehicles that visit the ISS. It models these relative trajectories using planned and real-time data from the vehicle. The tool monitors ongoing rendezvous trajectory relative motion and ensures visiting vehicles stay within agreed corridors. The software provides the ability to update or re-plan a rendezvous to support contingency operations. Previously, new parameters could not be added and incorporated into the system on the fly; if an unanticipated capability was not discovered until the vehicle was flying, there was no way to update the tool.
NASA Astrophysics Data System (ADS)
Smajgl, A.; Larson, S.; Hug, B.; De Freitas, D. M.
2010-12-01
This paper presents a tool for documenting and monitoring water use benefits in the Great Barrier Reef catchments that allows temporal and spatial comparison across the region. Water, water use benefits and water allocations are currently receiving much attention from Australian policy makers and conservation practitioners. Because of the inherent complexity and variability in water quality, it is essential that scientific information is presented in a meaningful way to policy makers, managers and, ultimately, to the general public who have to live with the consequences of the decisions. We developed an inexpensively populated and easily understandable water use benefit index as a tool for community-based monitoring of water-related trends in the Great Barrier Reef region. The index is based on a comparative list of selected water-related indices, integrating attributes across the physico-chemical, economic, social, and ecological domains currently used in the assessment of water quality, water quantity and water use benefits in Australia. Our findings indicate that the proposed index allows the identification of water performance indicators by temporal and spatial comparisons. Benefits for decision makers and conservation practitioners include a flexible way of prioritizing the domain of highest concern. The broader community benefits from a comprehensive and user-friendly tool that communicates changes in water quality trends more effectively.
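A composite index of this kind is, at its simplest, a weighted average of normalized domain scores. The sketch below is a generic illustration of that arithmetic only; the actual indicator list, scores and weights in the study are its own.

```python
# hypothetical sub-indicator scores per domain, each already scaled to [0, 1];
# the real index uses its own indicator list and weighting scheme
domains = {
    "physico-chemical": 0.8,   # e.g. a water-quality compliance score
    "economic": 0.6,
    "social": 0.7,
    "ecological": 0.5,
}
weights = {"physico-chemical": 0.3, "economic": 0.2,
           "social": 0.2, "ecological": 0.3}

def benefit_index(scores, weights):
    """Weighted average of normalized domain scores -> single [0, 1] index."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[d] * weights[d] for d in scores)

print(round(benefit_index(domains, weights), 3))
```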
Optimizing ATLAS code with different profilers
NASA Astrophysics Data System (ADS)
Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.
2014-06-01
After the current maintenance period, the LHC will provide higher-energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code comprises approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort, different profiling tools and techniques are being used. These include well-known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used to improve the performance of new magnetic field code and to identify potential vectorization targets in several places, such as the Runge-Kutta propagation code.
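Pin itself is a C++ instrumentation framework, but the call-counting idea it implements can be mimicked in a few lines with Python's `sys.settrace`. This is purely an analogy to show what "call counts" means, not how Pin works internally:

```python
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    # count every function entry, as a call-count instrumentation tool would
    if event == "call":
        call_counts[frame.f_code.co_name] += 1
    return None  # no per-line tracing needed

def inner(x):
    return x * x

def outer(n):
    total = 0
    for i in range(n):
        total += inner(i)
    return total

sys.settrace(tracer)
try:
    outer(5)
finally:
    sys.settrace(None)   # always uninstall the tracer

print(call_counts["inner"], call_counts["outer"])
```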
HappyFace as a generic monitoring tool for HEP experiments
NASA Astrophysics Data System (ADS)
Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard
2015-12-01
The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on and clarify the status of each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and stores the information and the status of different HEP monitoring resources in the common HappyFace database, and displays the information and the status through a single interface. However, this model of HappyFace relied on the monitoring resources, which are always under development in the HEP experiments. Consequently, HappyFace needed direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace. It allows its basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to run various diagnostic tools in the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open for various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented.
The new HappyFace system has been successfully integrated and now it displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.
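The aggregate-process-store-display workflow described above can be reduced to a small skeleton. All names here are hypothetical, not HappyFace's actual API; the point is the pattern of independent acquisition modules feeding one common store and one summary view.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModuleStatus:
    name: str
    ok: bool
    detail: str

class MonitoringCore:
    """Minimal HappyFace-like core: registered modules are polled,
    results land in one common store and one summary view."""
    def __init__(self):
        self.modules = []   # (name, acquire-callable) pairs
        self.store = {}     # name -> ModuleStatus

    def register(self, name: str, acquire: Callable[[], ModuleStatus]):
        self.modules.append((name, acquire))

    def run(self):
        for name, acquire in self.modules:
            try:
                self.store[name] = acquire()
            except Exception as exc:   # a failing probe must not kill the core
                self.store[name] = ModuleStatus(name, False, str(exc))

    def summary(self):
        return {name: status.ok for name, status in self.store.items()}

core = MonitoringCore()
core.register("transfers", lambda: ModuleStatus("transfers", True, "42 MB/s"))
core.register("jobs", lambda: ModuleStatus("jobs", False, "queue stalled"))
core.run()
print(core.summary())
```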
Machine assisted histogram classification
NASA Astrophysics Data System (ADS)
Benyó, B.; Gaspar, C.; Somogyi, P.
2010-04-01
LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important because it allows verification of the detector performance. Anomalies, such as missing values or unexpected distributions, can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be done either visually, using instruments such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph-based clustering tool combined with a machine learning algorithm and demonstrate its use by processing histograms representing 2D hitmap events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.
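A graph-based clustering of monitoring histograms, in the spirit described above, can be sketched with a similarity graph and connected components. The chi-square distance, the threshold and the toy histograms are all assumptions for illustration, not the tool's actual algorithm.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-9):
    """Symmetric chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def cluster_histograms(hists, threshold):
    """Build a similarity graph (edge if distance < threshold) and
    return its connected components via depth-first search."""
    n = len(hists)
    adj = [[j for j in range(n) if j != i
            and chi2_distance(hists[i], hists[j]) < threshold]
           for i in range(n)]
    seen, clusters = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        clusters.append(sorted(comp))
    return clusters

# toy monitoring histograms: two "healthy" shapes and one anomaly
rng = np.random.default_rng(1)
healthy = np.histogram(rng.normal(0, 1, 5000), bins=20, range=(-5, 5))[0]
healthy2 = np.histogram(rng.normal(0, 1, 5000), bins=20, range=(-5, 5))[0]
anomaly = np.histogram(rng.normal(3, 1, 5000), bins=20, range=(-5, 5))[0]
hists = [h / h.sum() for h in (healthy, healthy2, anomaly)]
print(cluster_histograms(hists, threshold=0.1))
```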
Wacker, Michael A.
2010-01-01
Borehole geophysical logs were obtained from selected exploratory coreholes in the vicinity of the Florida Power and Light Company Turkey Point Power Plant. The geophysical logging tools used and logging sequences performed during this project are summarized herein to include borehole logging methods, descriptions of the properties measured, types of data obtained, and calibration information.
NASA Technical Reports Server (NTRS)
Maidel, Veronica; Stanton, Jeffrey M.
2010-01-01
This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software or tools revealed no available tools to enable extraction of mental models automatically, relying only on collected communication text. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. 
Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.
Banerjee, Chiranjib; Westberg, Michael; Breitenbach, Thomas; Bregnhøj, Mikkel; Ogilby, Peter R
2017-06-06
The oxidation of lipids is an important phenomenon with ramifications for disciplines that range from food science to cell biology. The development and characterization of tools and techniques to monitor lipid oxidation are thus relevant. Of particular significance in this regard are tools that facilitate the study of oxidations at interfaces in heterogeneous samples (e.g., oil-in-water emulsions, cell membranes). In this article, we establish a proof-of-principle for methods to initiate and then monitor such oxidations with high spatial resolution. The experiments were performed using oil-in-water emulsions of polyunsaturated fatty acids (PUFAs) prepared from cod liver oil. We produced singlet oxygen at a point near the oil-water interface of a given PUFA droplet in a spatially localized two-photon photosensitized process. We then followed the oxidation reactions initiated by this process with the fluorescence-based imaging technique of structured illumination microscopy (SIM). We conclude that the approach reported herein has attributes well-suited to the study of lipid oxidation in heterogeneous samples.
Structural Health Monitoring with Fiber Bragg Grating and Piezo Arrays
NASA Technical Reports Server (NTRS)
Black, Richard J.; Faridian, Ferey; Moslehi, Behzad; Sotoudeh, Vahid
2012-01-01
Structural health monitoring (SHM) is one of the most important tools available for the maintenance, safety, and integrity of aerospace structural systems. Lightweight, electromagnetic-interference- immune, fiber-optic sensor-based SHM will play an increasing role in more secure air transportation systems. Manufacturers and maintenance personnel have pressing needs for significantly improving safety and reliability while providing for lower inspection and maintenance costs. Undetected or untreated damage may grow and lead to catastrophic structural failure. Damage can originate from the strain/stress history of the material, imperfections of domain boundaries in metals, delamination in multi-layer materials, or the impact of machine tools in the manufacturing process. Damage can likewise develop during service life from wear and tear, or under extraordinary circumstances such as with unusual forces, temperature cycling, or impact of flying objects. Monitoring and early detection are key to preventing a catastrophic failure of structures, especially when these are expected to perform near their limit conditions.
Image edge detection based tool condition monitoring with morphological component analysis.
Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng
2017-07-01
The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection has been a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to well-established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results have shown that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
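Full morphological component analysis is beyond a short sketch, but the edge-extraction step can be illustrated with a plain morphological gradient (dilation minus erosion) on a synthetic image. This is a simplification for illustration, not a substitute for the MCA-based method described above.

```python
import numpy as np

def dilate(img):
    """3x3 grayscale dilation via shifted maxima (edge-replicated borders)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.max(stack, axis=0)

def erode(img):
    """3x3 grayscale erosion via shifted minima."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.min(stack, axis=0)

def morphological_edges(img, thresh=0.5):
    """Morphological gradient (dilation - erosion), then threshold."""
    return (dilate(img) - erode(img)) > thresh

# synthetic "tool wear" image: a bright worn region on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = morphological_edges(img)
print(edges.astype(int))
```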
De Beer, T R M; Vercruysse, P; Burggraeve, A; Quinten, T; Ouyang, J; Zhang, X; Vervaet, C; Remon, J P; Baeyens, W R G
2009-09-01
The aim of the present study was to examine the complementary properties of Raman and near infrared (NIR) spectroscopy as PAT tools for the fast, noninvasive, nondestructive and in-line process monitoring of a freeze drying process. Therefore, Raman and NIR probes were built into the freeze dryer chamber, allowing simultaneous process monitoring. A 5% (w/v) mannitol solution was used as a model for freeze drying. Raman and NIR spectra were collected continuously during freeze drying (one Raman and one NIR spectrum per minute) and analyzed using principal component analysis (PCA) and multivariate curve resolution (MCR). Raman spectroscopy was able to supply information about (i) the mannitol solid state throughout the entire process, (ii) the endpoint of freezing (endpoint of mannitol crystallization), and (iii) several physical and chemical phenomena occurring during the process (onset of ice nucleation, onset of mannitol crystallization). NIR spectroscopy proved to be a more sensitive tool for monitoring the critical aspects of drying: (i) the endpoint of ice sublimation and (ii) the release of hydrate water during storage. Furthermore, NIR spectroscopy confirmed several Raman observations: the start of ice nucleation, the end of mannitol crystallization and the solid state characteristics of the end product. When Raman and NIR monitoring were performed on the same vial, the Raman signal saturated during the freezing step because reflected NIR light reached the Raman detector; NIR and Raman measurements were therefore done on separate vials. The importance of probe position (Raman probe above the vial, NIR probe at the bottom of the vial sidewall) for obtaining all required critical information is also outlined. Combining Raman and NIR spectroscopy for the simultaneous monitoring of freeze drying allows monitoring of almost all critical freeze drying process aspects.
The two techniques not only complement each other, they also provide mutual confirmation of specific conclusions.
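The PCA step described in the abstract above can be sketched in a few lines: each collected spectrum becomes a row of a data matrix, and the leading principal component then exposes the dominant process trend (e.g. a crystallization band growing over successive spectra). The sketch below uses synthetic spectra with hypothetical band positions purely for illustration; it is not the authors' actual analysis.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """PCA via SVD on mean-centered spectra (rows = time points, cols = wavenumbers)."""
    X = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    explained = (s ** 2) / np.sum(s ** 2)  # fraction of variance per component
    return scores, explained[:n_components]

# Synthetic "process" spectra: a constant baseline band plus a second band that
# grows over time, mimicking a species appearing during the process.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                       # arbitrary spectral axis
baseline = np.exp(-((x - 0.3) ** 2) / 0.01)
growing = np.exp(-((x - 0.7) ** 2) / 0.005)
t = np.linspace(0, 1, 60)                        # one spectrum per minute
spectra = np.outer(np.ones_like(t), baseline) + np.outer(t, growing)
spectra += 0.01 * rng.standard_normal(spectra.shape)

scores, explained = pca_scores(spectra)
print(scores.shape)            # (60, 2)
print(explained[0] > 0.5)      # first PC captures the dominant process trend
```

On real in-line spectra, it is the trajectory of the first few component scores over time that reveals events such as the onset of ice nucleation or the endpoint of crystallization.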
NASA Astrophysics Data System (ADS)
Humber, M. L.; Becker-Reshef, I.; Nordling, J.; Barker, B.; McGaughey, K.
2014-12-01
The GEOGLAM Crop Monitor's Crop Assessment Tool was released in August 2013 in support of the GEOGLAM Crop Monitor's objective to develop transparent, timely crop condition assessments in primary agricultural production areas, highlighting potential hotspots of stress/bumper crops. The Crop Assessment Tool allows users to view satellite-derived products, best available crop masks, and crop calendars (created in collaboration with GEOGLAM Crop Monitor partners), and then in turn submit crop assessment entries detailing the crop's condition, drivers, impacts, trends, and other information. Although the Crop Assessment Tool was originally intended to collect data on major crop production at the global scale, the types of data collected are also relevant to the food security and rangelands monitoring communities. In line with the GEOGLAM Countries at Risk philosophy of "foster[ing] the coordination of product delivery and capacity building efforts for national and regional organizations, and the development of harmonized methods and tools", a modified version of the Crop Assessment Tool is being developed for the USAID Famine Early Warning Systems Network (FEWS NET). As a member of the Countries at Risk component of GEOGLAM, FEWS NET provides agricultural monitoring, timely food security assessments, and early warnings of potential significant food shortages, focusing specifically on countries at risk of food security emergencies. While the FEWS NET adaptation of the Crop Assessment Tool focuses on crop production in the context of food security rather than large-scale production, the data collected are nearly identical to the data collected by the Crop Monitor. Combined, the countries monitored by FEWS NET and the GEOGLAM Crop Monitor number more than 90, representing the most important regions for crop production and food security.
E-media and crop nutrition monitoring
Diana C. Coburn; Ral E. Moreno
2007-01-01
Modifying media through the addition of slow-release fertilizers and other amendments is a beneficial tool in optimizing seedling performance. At our container reforestation nursery in southwest Washington, we have been enhancing our media through the incorporation of slow-release fertilizers and other amendments in our culturing of native conifers. The concept behind...
ERIC Educational Resources Information Center
Shaw, Robert E.; And Others
1997-01-01
Proposes a theoretical framework for designing online-situated assessment tools for multimedia instructional systems. Uses a graphic method based on ecological psychology to monitor student performance through a learning activity. Explores the method's feasibility in case studies describing instructional systems teaching critical-thinking and…
A concept for performance management for Federal science programs
Whalen, Kevin G.
2017-11-06
The demonstration of clear linkages between planning, funding, outcomes, and performance management has created unique challenges for U.S. Federal science programs. An approach is presented here that characterizes science program strategic objectives by one of five “activity types”: (1) knowledge discovery, (2) knowledge development and delivery, (3) science support, (4) inventory and monitoring, and (5) knowledge synthesis and assessment. The activity types relate to performance measurement tools for tracking outcomes of research funded under the objective. The result is a multi-time scale, integrated performance measure that tracks individual performance metrics synthetically while also measuring progress toward long-term outcomes. Tracking performance on individual metrics provides explicit linkages to root causes of potentially suboptimal performance and captures both internal and external program drivers, such as customer relations and science support for managers. Functionally connecting strategic planning objectives with performance measurement tools is a practical approach for publicly funded science agencies that links planning, outcomes, and performance management—an enterprise that has created unique challenges for public-sector research and development programs.
Electroencephalographic monitoring of complex mental tasks
NASA Technical Reports Server (NTRS)
Guisado, Raul; Montgomery, Richard; Montgomery, Leslie; Hickey, Chris
1992-01-01
Outlined here is the development of neurophysiological procedures to monitor operators during the performance of cognitive tasks. Our approach included the use of electroencephalographic (EEG) and rheoencephalographic (REG) techniques to determine changes in cortical function associated with the operator's cognitive state. A two-channel tetrapolar REG, a single-channel forearm impedance plethysmograph, a Lead I electrocardiogram (ECG) and a 21-channel EEG were used to measure subject responses to various visual-motor cognitive tasks. Testing, analytical, and display procedures for EEG and REG monitoring were developed that extend the state of the art and provide a valuable tool for the study of cerebral circulatory and neural activity during cognition.
An HTML Tool for Production of Interactive Stereoscopic Compositions.
Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi
2016-12-01
The benefits of stereoscopic vision in medical applications have been appreciated and thoroughly studied for more than a century. The use of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market of 3D-enabled technologies is blooming. New high resolution stereo cameras, TVs, projectors, monitors, and head mounted displays are becoming available. This equipment, together with a corresponding application program interface (API), could be integrated into a system with relative ease. Such systems could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for the production of interactive stereoscopic graphical user interfaces, which could serve as a software layer for web-based medical systems facilitating the stereoscopic effect. The tool's operation mode and the results of the conducted subjective and objective performance tests are then presented.
A Study of Topic and Topic Change in Conversational Threads
2009-09-01
...unigrams. By converting documents to vector space representations, the tools of geometry and algebra can be applied, and questions of difference...
2011-07-01
...was consistent with results of conventional borehole fluid resistivity, temperature logging, and flow metering at other sites that typically indicated only two or three active fractures in each hole. The following tests were performed in each boundary monitoring well: Gamma Ray; Spontaneous Potential (SP); Single Point Resistance (SPR)...
Quality Assurance System. Volume 1. Report (Technology Transfer Program)
1980-03-03
Naval Surface Warfare Center, CD Code 2230 (Design Integration Tools), Building 192 Room 128, 9500 MacArthur Blvd, Bethesda, MD 20817-5700. TABLE OF CONTENTS, VOLUME I (FINDINGS AND CONCLUSIONS): Section 1, INTRODUCTION; 1.1 Purpose and Scope; 1.2 Organization of...
Hawken, Susan J; Stasiak, Karolina; Lucassen, Mathijs FG; Fleming, Theresa; Shepherd, Matthew; Greenwood, Andrea; Osborne, Raechel; Merry, Sally N
2017-01-01
Background Computerized cognitive behavioral therapy (cCBT) is an acceptable and promising treatment modality for adolescents with mild-to-moderate depression. Many cCBT programs are standalone packages with no way for clinicians to monitor progress or outcomes. We sought to develop an electronic monitoring (e-monitoring) tool in consultation with clinicians and adolescents to allow clinicians to monitor mood, risk, and treatment adherence of adolescents completing a cCBT program called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts). Objective The objectives of our study were as follows: (1) assess clinicians’ and adolescents’ views on using an e-monitoring tool and to use this information to help shape the development of the tool and (2) assess clinician experiences with a fully developed version of the tool that was implemented in their clinical service. Methods A descriptive qualitative study using semistructured focus groups was conducted in New Zealand. In total, 7 focus groups included clinicians (n=50) who worked in primary care, and 3 separate groups included adolescents (n=29). Clinicians were general practitioners (GPs), school guidance counselors, clinical psychologists, youth workers, and nurses. Adolescents were recruited from health services and a high school. Focus groups were run to enable feedback at 3 phases that corresponded to the consultation, development, and postimplementation stages. Thematic analysis was applied to transcribed responses. Results Focus groups during the consultation and development phases revealed the need for a simple e-monitoring registration process with guides for end users. Common concerns were raised in relation to clinical burden, monitoring risk (and effects on the therapeutic relationship), alongside confidentiality or privacy and technical considerations. Adolescents did not want to use their social media login credentials for e-monitoring, as they valued their privacy. 
However, adolescents did want information on seeking help and personalized monitoring and communication arrangements. Postimplementation, clinicians who had used the tool in practice revealed no adverse impact on the therapeutic relationship, and adolescents were not concerned about being e-monitored. Clinicians did need additional time to monitor adolescents, and the e-monitoring tool was used in a different way than was originally anticipated. Also, it was suggested that the registration process could be further streamlined and integrated with existing clinical data management systems, and the use of clinician alerts could be expanded beyond the scope of simply flagging adolescents of concern. Conclusions An e-monitoring tool was developed in consultation with clinicians and adolescents. However, the study revealed the complexity of implementing the tool in clinical practice. Of salience were privacy, parallel monitoring systems, integration with existing electronic medical record systems, customization of the e-monitor, and preagreed monitoring arrangements between clinicians and adolescents. PMID:28077345
Hetrick, Sarah E; Dellosa, Maria Kristina; Simmons, Magenta B; Phillips, Lisa
2015-02-01
To develop and examine the feasibility of an online monitoring tool of depressive symptoms, suicidality and side effects. The online tool was developed based on guideline recommendations, and employed already validated and widely used measures. Quantitative data about its use, and qualitative information on its functionality and usefulness were collected from surveys, a focus group and individual interviews. Fifteen young people completed the tool between 1 and 12 times, and reported it was easy to use. Clinicians suggested it was too long and could be completed in the waiting room to lessen impact on session time. Overall, clients and clinicians who used the tool found it useful. Results show that an online monitoring tool is potentially useful as a systematic means for monitoring symptoms, but further research is needed including how to embed the tool within clinical practice. © 2014 Wiley Publishing Asia Pty Ltd.
CubeSat constellation design for air traffic monitoring
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Rios, Joseph L.; Gerhardt, David; Pham, Camvu
2016-11-01
Global and local air traffic from suitably equipped aircraft can be tracked by receiving the Automatic Dependent Surveillance-Broadcast (ADS-B) signal, and the tracking information may then be used for control from ground-based stations. In this paper, we describe a tool for designing a constellation of small satellites which demonstrates, through high-fidelity modeling based on simulated air traffic data, the value of space-based ADS-B monitoring. It thereby provides recommendations for cost-efficient deployment of a constellation of small satellites to increase safety and situational awareness in the currently poorly served surveillance area of Alaska. Air traffic data were obtained from NASA's Future ATM Concepts Evaluation Tool for the Alaskan airspace over one day. The results presented were driven by MATLAB, with the satellites propagated and coverage calculated using AGI's Satellite Tool. While ad-hoc and precession-spread constellations were quantitatively evaluated, Walker constellations show the best performance in simulation. Sixteen satellites in two perpendicular orbital planes are shown to provide more than 99% coverage over representative Alaskan airspace, and the maximum time gap during which any airplane in Alaska is not covered is six minutes, thereby meeting the standard set by the International Civil Aviation Organization to monitor every airplane at least once every fifteen minutes. In spite of the risk of signal collision when multiple packets arrive at the satellite receiver, the proposed constellation shows 99% cumulative probability of reception within four minutes when airplanes transmit every minute, and 100% reception probability when they transmit every second. Data downlink can be performed using any of the three ground stations of the NASA Earth Network in Alaska.
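The fifteen-minute ICAO requirement cited above reduces, in a coverage simulation, to a simple check on each aircraft's coverage timeline: find the longest run of uncovered time steps. A minimal sketch follows, with a hypothetical timeline and one-minute steps; it is not the paper's actual MATLAB/STK pipeline.

```python
def max_coverage_gap(covered, step_minutes=1.0):
    """Longest run of uncovered time steps, in minutes.
    `covered` is a boolean timeline: covered[i] is True if any satellite
    sees the aircraft during time step i."""
    longest = run = 0
    for c in covered:
        run = 0 if c else run + 1   # extend or reset the current outage run
        longest = max(longest, run)
    return longest * step_minutes

# Hypothetical timeline: covered except for one 6-minute outage.
timeline = [True] * 20 + [False] * 6 + [True] * 30
gap = max_coverage_gap(timeline)
print(gap)          # 6.0
print(gap <= 15.0)  # True: meets the ICAO 15-minute monitoring standard
```

In a full constellation study this check would be evaluated for every simulated aircraft track, and the constellation accepted only if the worst-case gap over all tracks stays within the requirement.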
Souza, João Paulo; Oladapo, Olufemi T; Bohren, Meghan A; Mugerwa, Kidza; Fawole, Bukola; Moscovici, Leonardo; Alves, Domingos; Perdona, Gleici; Oliveira-Ciabati, Livia; Vogel, Joshua P; Tunçalp, Özge; Zhang, Jim; Hofmeyr, Justus; Bahl, Rajiv; Gülmezoglu, A Metin
2015-05-26
The partograph is currently the main tool available to support decision-making of health professionals during labour. However, the rate of appropriate use of the partograph is disappointingly low. Apart from limitations that are associated with partograph use, evidence of positive impact on labour-related health outcomes is lacking. The main goal of this study is to develop a Simplified, Effective, Labour Monitoring-to-Action (SELMA) tool. The primary objectives are: to identify the essential elements of intrapartum monitoring that trigger the decision to use interventions aimed at preventing poor labour outcomes; to develop a simplified, monitoring-to-action algorithm for labour management; and to compare the diagnostic performance of SELMA and partograph algorithms as tools to identify women who are likely to develop poor labour-related outcomes. A prospective cohort study will be conducted in eight health facilities in Nigeria and Uganda (four facilities from each country). All women admitted for vaginal birth will comprise the study population (estimated sample size: 7,812 women). Data will be collected on maternal characteristics on admission, labour events and pregnancy outcomes by trained research assistants at the participating health facilities. Prediction models will be developed to identify women at risk of intrapartum-related perinatal death or morbidity (primary outcomes) throughout the course of labour. These prediction models will be used to assemble a decision-support tool that will be able to suggest the best course of action to avert adverse outcomes during the course of labour.
To develop this set of prediction models, we will use up-to-date techniques of prognostic research, including identification of important predictors, assigning of relative weights to each predictor, estimation of the predictive performance of the model through calibration and discrimination, and determination of its potential for application using internal validation techniques. This research offers an opportunity to revisit the theoretical basis of the partograph. It is envisioned that the final product would help providers overcome the challenging tasks of promptly interpreting complex labour information and deriving appropriate clinical actions, and thus increase efficiency of the care process, enhance providers' competence and ultimately improve labour outcomes. Please see related articles ' http://dx.doi.org/10.1186/s12978-015-0027-6 ' and ' http://dx.doi.org/10.1186/s12978-015-0028-5 '.
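The two model-performance notions named above, discrimination and calibration, can be illustrated with toy numbers. Discrimination is commonly summarized by the area under the ROC curve (the probability that a randomly chosen adverse-outcome case receives a higher predicted risk than a non-case), and the simplest calibration check compares mean predicted risk with the observed event rate. The data below are invented for illustration and are not SELMA results.

```python
def auc(labels, scores):
    """Discrimination: probability that a random positive outranks a random
    negative (area under the ROC curve, via pairwise rank comparisons)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_in_the_large(labels, probs):
    """Calibration: mean predicted risk minus observed event rate (0 is ideal)."""
    return sum(probs) / len(probs) - sum(labels) / len(labels)

# Hypothetical predicted risks for 8 labours (1 = adverse outcome observed).
labels = [0, 0, 0, 1, 0, 1, 1, 1]
probs  = [0.1, 0.2, 0.45, 0.4, 0.3, 0.7, 0.8, 0.9]

print(auc(labels, probs))  # -> 0.9375 (15 of 16 positive/negative pairs ranked correctly)
print(calibration_in_the_large(labels, probs))  # slightly negative: mean risk just below event rate
```

Internal validation, as the abstract notes, would repeat such measurements on resampled or held-out data to estimate how much the apparent performance is inflated by overfitting.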
Malaria rapid diagnostic tests in elimination settings—can they find the last parasite?
McMorrow, M. L.; Aidoo, M.; Kachur, S. P.
2016-01-01
Rapid diagnostic tests (RDTs) for malaria have improved the availability of parasite-based diagnosis throughout the malaria-endemic world. Accurate malaria diagnosis is essential for malaria case management, surveillance, and elimination. RDTs are inexpensive, simple to perform, and provide results in 15–20 min. Despite high sensitivity and specificity for Plasmodium falciparum infections, RDTs have several limitations that may reduce their utility in low-transmission settings: they do not reliably detect low-density parasitaemia (≤200 parasites/μL), many are less sensitive for Plasmodium vivax infections, and their ability to detect Plasmodium ovale and Plasmodium malariae is unknown. Therefore, in elimination settings, alternative tools with higher sensitivity for low-density infections (e.g. nucleic acid-based tests) are required to complement field diagnostics, and new highly sensitive and specific field-appropriate tests must be developed to ensure accurate diagnosis of symptomatic and asymptomatic carriers. As malaria transmission declines, the proportion of low-density infections among symptomatic and asymptomatic persons is likely to increase, which may limit the utility of RDTs. Monitoring malaria in elimination settings will probably depend on the use of more than one diagnostic tool in clinical-care and surveillance activities, and the combination of tools utilized will need to be informed by regular monitoring of test performance through effective quality assurance. PMID:21910780
Real-time seismic monitoring needs of a building owner - And the solution: A cooperative effort
Celebi, M.; Sanli, A.; Sinclair, M.; Gallant, S.; Radulescu, D.
2004-01-01
A recently implemented advanced seismic monitoring system for a 24-story building facilitates recording of accelerations and computing displacements and drift ratios in near-real time to measure the earthquake performance of the building. The drift ratio is related to the damage condition of the specific building. This system meets the owner's needs for rapid quantitative input to assessments and decisions on post-earthquake occupancy. The system is now successfully working and, in the absence of strong shaking to date, is producing low-amplitude data in real time for routine analyses and assessment. Studies of such data to date indicate that the configured monitoring system with its building-specific software can be a useful tool in the rapid assessment of buildings and other structures following an earthquake. Such systems can be used for health monitoring of a building, for assessing performance-based design and analysis procedures, for long-term assessment of structural characteristics, and for long-term damage detection.
Investigation of Data Fusion Applied to Health Monitoring of Wind Turbine Drive train Components
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Sheng, Shuangwen
2011-01-01
The research described was performed on diagnostic tools used to detect damage to dynamic mechanical components in a wind turbine gearbox. Different monitoring technologies were evaluated by collecting vibration and oil debris data from tests performed on a "healthy" gearbox and a damaged gearbox in a dynamometer test stand located at the National Renewable Energy Laboratory. The damaged gearbox tested was removed from the field after experiencing component damage due to two loss-of-oil events and was retested under controlled conditions in the dynamometer test stand. Preliminary results indicate oil debris and vibration data can be integrated to assess the health of the wind turbine gearbox.
Balahbib, Abdelaali; Amarir, Fatima; Corstjens, Paul L A M; de Dood, Claudia J; van Dam, Govert J; Hajli, Amina; Belhaddad, Meryem; El Mansouri, Bouchra; Sadak, Abderrahim; Rhajaoui, Mohamed; Adlaoui, El Bachir
2017-04-06
After the presumed interruption of schistosomiasis transmission, and later in post-elimination settings, sensitive tools are required to monitor infection status and prevent potential re-emergence. In Rahala, where the transmission cycle of Schistosoma haematobium has been interrupted since 2004 but 30% of snails are still infected with S. bovis, potential human S. bovis infection cannot be excluded. As methods based on egg counts do not provide the required sensitivity, antibody or antigen assays are envisaged as the most appropriate tools for this type of monitoring. In this pilot study, the performance of three assays was compared: two commercially available antibody tests (ELISA and haemagglutination format) indicating exposure, and an antigen test (lateral flow strip format) demonstrating active infection. All 37 recruited study participants resided in Rahala (Akka, province Tata, Morocco). Participants had been diagnosed and cured of schistosomiasis in the period between 1983 and 2003. In 2015 these asymptomatic participants provided fresh clinical samples (blood and urine) for analysis with the aforementioned diagnostic tests. No eggs were identified in the urine of the 37 participants. The haemagglutination test indicated 6 antibody positives whereas the ELISA indicated 28 antibody positives, one indecisive and one false positive. ELISA and haemagglutination results matched for 18 individuals, amongst which 5 out of 6 haemagglutination positives. With the antigen test (performed on paired serum and urine samples), serum from two participants (cured 21 and 32 years ago) indicated the presence of low levels of the highly specific Schistosoma circulating anodic antigen (CAA), demonstrating low worm-level infections (less than 5 pg/ml, probably corresponding to a single worm pair). One also tested CAA positive in urine. ELISA indicated the presence of human anti-Schistosoma antibodies in these two CAA positive cases; haemagglutination results were negative.
To prevent re-emergence of schistosomiasis in Morocco, current monitoring programs require specific protocols that include testing antibody positives for active infection with the UCP-LF CAA test, the appropriate diagnostic tool for identifying low-grade Schistosoma infections in travelers, immigrants, and assumed-cured cases. The test is genus-specific and will also identify infections related to S. bovis.
Making intelligent systems team players. A guide to developing intelligent monitoring systems
NASA Technical Reports Server (NTRS)
Land, Sherry A.; Malin, Jane T.; Thronesberry, Carroll; Schreckenghost, Debra L.
1995-01-01
This reference guide for developers of intelligent monitoring systems is based on lessons learned by developers of the DEcision Support SYstem (DESSY), an expert system that monitors Space Shuttle telemetry data in real time. DESSY makes inferences about commands, state transitions, and simple failures. It performs failure detection rather than in-depth failure diagnostics. A listing of rules from DESSY and cue cards from DESSY subsystems are included to give the development community a better understanding of the selected model system. The G-2 programming tool used in developing DESSY provides an object-oriented, rule-based environment, but many of the principles in use here can be applied to any type of monitoring intelligent system. The step-by-step instructions and examples given for each stage of development are in G-2, but can be used with other development tools. This guide first defines the authors' concept of real-time monitoring systems, then tells prospective developers how to determine system requirements, how to build the system through a combined design/development process, and how to solve problems involved in working with real-time data. It explains the relationships among operational prototyping, software evolution, and the user interface. It also explains methods of testing, verification, and validation. It includes suggestions for preparing reference documentation and training users.
Adapting HIV patient and program monitoring tools for chronic non-communicable diseases in Ethiopia.
Letebo, Mekitew; Shiferaw, Fassil
2016-06-02
Chronic non-communicable diseases (NCDs) have become a huge public health concern in developing countries. Many resource-poor countries facing this growing epidemic, however, lack systems for an organized and comprehensive response to NCDs. Lack of NCD national policy, strategies, treatment guidelines and surveillance and monitoring systems are features of health systems in many developing countries. Successfully responding to the problem requires a number of actions by the countries, including developing context-appropriate chronic care models and programs and standardization of patient and program monitoring tools. In this cross-sectional qualitative study we assessed existing monitoring and evaluation (M&E) tools used for NCD services in Ethiopia. Since HIV care and treatment program is the only large-scale chronic care program in the country, we explored the M&E tools being used in the program and analyzed how these tools might be adapted to support NCD services in the country. Document review and in-depth interviews were the main data collection methods used. The interviews were held with health workers and staff involved in data management purposively selected from four health facilities with high HIV and NCD patient load. Thematic analysis was employed to make sense of the data. Our findings indicate the apparent lack of information systems for NCD services, including the absence of standardized patient and program monitoring tools to support the services. We identified several HIV care and treatment patient and program monitoring tools currently being used to facilitate intake process, enrolment, follow up, cohort monitoring, appointment keeping, analysis and reporting. Analysis of how each tool being used for HIV patient and program monitoring can be adapted for supporting NCD services is presented. 
Given the similarity between HIV care and treatment and NCD services and the huge investment already made to implement standardized tools for HIV care and treatment program, adaptation and use of HIV patient and program monitoring tools for NCD services can improve NCD response in Ethiopia through structuring services, standardizing patient care and treatment, supporting evidence-based planning and providing information on effectiveness of interventions.
Integrating Oil Debris and Vibration Gear Damage Detection Technologies Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Afjeh, Abdollah A.
2002-01-01
A diagnostic tool for detecting damage to spur gears was developed. Two different measurement technologies, wear debris analysis and vibration, were integrated into a health monitoring system for detecting surface fatigue pitting damage on gears. This integrated system showed improved detection and decision-making capabilities as compared to using individual measurement technologies. The diagnostic tool was developed and evaluated experimentally by collecting vibration and oil debris data from fatigue tests, with and without pitting, performed in the NASA Glenn Spur Gear Fatigue Test Rig. Results show that combining the two measurement technologies improves the detection of pitting damage on spur gears.
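A fuzzy-logic fusion of two condition indicators, as described above, can be sketched as membership functions plus min/max rule evaluation. The membership shapes, thresholds, and output weighting below are invented for illustration and are not the NASA system's actual rules.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuse_health(oil_debris, vibration):
    """Toy fuzzy fusion of two normalized (0..1) indicators into a damage score.
    Rule 1 (AND, via min): oil HIGH and vibration HIGH -> damage.
    Rule 2: only one indicator elevated -> a weaker warning contribution."""
    oil_high = tri(oil_debris, 0.3, 1.0, 1.7)   # "high" membership, peaks at 1.0
    vib_high = tri(vibration, 0.3, 1.0, 1.7)
    damage = min(oil_high, vib_high)            # both indicators agree
    warn = max(oil_high, vib_high) - damage     # residual single-indicator evidence
    return damage + 0.5 * warn                  # half weight for unconfirmed evidence

print(round(fuse_health(0.90, 0.85), 3))  # both elevated -> high damage score
print(round(fuse_health(0.10, 0.90), 3))  # one indicator elevated -> intermediate
print(round(fuse_health(0.05, 0.10), 3))  # both low -> 0.0
```

The practical benefit mirrored here is the one the abstract reports: requiring agreement between oil debris and vibration evidence suppresses false alarms from either sensor alone while still flagging confirmed pitting damage strongly.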
Data management system advanced development
NASA Technical Reports Server (NTRS)
Douglas, Katherine; Humphries, Terry
1990-01-01
The Data Management System (DMS) Advanced Development task provides for the development of concepts, new tools, DMS services, and for the testing of the Space Station DMS hardware and software. It also provides for the development of techniques capable of determining the effects of system changes/enhancements, additions of new technology, and/or hardware and software growth on system performance. This paper will address the built-in characteristics which will support network monitoring requirements in the design of the evolving DMS network implementation, functional and performance requirements for a real-time, multiprogramming, multiprocessor operating system, and the possible use of advanced development techniques such as expert systems and artificial intelligence tools in the DMS design.
Persuasive Performance Feedback: The Effect of Framing on Self-Efficacy
Choe, Eun Kyoung; Lee, Bongshin; Munson, Sean; Pratt, Wanda; Kientz, Julie A.
2013-01-01
Self-monitoring technologies have proliferated in recent years as they offer excellent potential for promoting healthy behaviors. Although these technologies have varied ways of providing real-time feedback on a user’s current progress, we have a dearth of knowledge of the framing effects on the performance feedback these tools provide. With an aim to create influential, persuasive performance feedback that will nudge people toward healthy behaviors, we conducted an online experiment to investigate the effect of framing on an individual’s self-efficacy. We identified 3 different types of framing that can be applicable in presenting performance feedback: (1) the valence of performance (remaining vs. achieved framing), (2) presentation type (text-only vs. text with visual), and (3) data unit (raw vs. percentage). Results show that the achieved framing could lead to an increased perception of individual’s performance capabilities. This work provides empirical guidance for creating persuasive performance feedback, thereby helping people designing self-monitoring technologies to promote healthy behaviors. PMID:24551378
Advanced data management system architectures testbed
NASA Technical Reports Server (NTRS)
Grant, Terry
1990-01-01
The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.
Integrating reliability and maintainability into a concurrent engineering environment
NASA Astrophysics Data System (ADS)
Phillips, Clifton B.; Peterson, Robert R.
1993-02-01
This paper describes the results of a reliability and maintainability study conducted at the University of California, San Diego and supported by private industry. Private industry considered the study important and, under a cooperative agreement, provided the university access to innovative tools. The current capability of reliability and maintainability tools and how they fit into the design process is investigated. The evolution of design methodologies leading up to today's capability is reviewed for ways to enhance the design process while keeping cost under control. A method is provided for measuring the consequences of reliability and maintainability policy for design configurations in an electronic environment. The interaction of selected modern computer tool sets is described for reliability, maintainability, operations, and other elements of the engineering design process. These tools provide a robust system evaluation capability that brings life-cycle performance improvement information to engineers and their managers before systems are deployed, and allows them to monitor and track performance once a system is in operation.
CNC machine tool's wear diagnostic and prognostic by using dynamic Bayesian networks
NASA Astrophysics Data System (ADS)
Tobon-Mejia, D. A.; Medjaher, K.; Zerhouni, N.
2012-04-01
The failure of critical components in industrial systems may have negative consequences for availability, productivity, security, and the environment. To avoid such situations, the health condition of the physical system, and particularly of its critical components, can be constantly assessed by using monitoring data to perform on-line system diagnostics and prognostics. The present paper contributes to the assessment of the health condition of a computer numerical control (CNC) machine tool and the estimation of its remaining useful life (RUL). The proposed method relies on two main phases: an off-line phase and an on-line phase. During the first phase, the raw data provided by the sensors are processed to extract reliable features. The latter are used as inputs to learning algorithms in order to generate models that represent the wear behavior of the cutting tool. Then, in the second, assessment phase, the constructed models are exploited to identify the tool's current health state and to predict its RUL together with the associated confidence bounds. The proposed method is applied to a benchmark of condition monitoring data gathered during several cuts of a CNC tool. Simulation results are presented and discussed at the end of the paper.
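The off-line/on-line split described in this abstract can be illustrated with a heavily simplified stand-in for the paper's dynamic Bayesian network: a discrete health-state filter whose transition matrix plays the role of the model learned off-line, plus a crude expected-RUL computation. All state names, probabilities, and likelihoods below are invented for illustration; this is a sketch of the general technique, not the authors' implementation.

```python
STATES = ["new", "worn", "failed"]

# Left-to-right wear transition model, standing in for the DBN learned
# off-line from extracted features (values are assumptions).
TRANS = {
    "new":    {"new": 0.90, "worn": 0.10, "failed": 0.00},
    "worn":   {"new": 0.00, "worn": 0.85, "failed": 0.15},
    "failed": {"new": 0.00, "worn": 0.00, "failed": 1.00},
}

def filter_step(belief, likelihood):
    """One on-line step: predict with TRANS, then update with the
    observation likelihood of the current sensor features."""
    predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
    unnorm = {s: predicted[s] * likelihood[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

def expected_rul(belief, horizon=500):
    """Crude RUL estimate: expected number of steps before the belief is
    absorbed by the 'failed' state, summed over a finite horizon."""
    b, rul = dict(belief), 0.0
    for _ in range(horizon):
        rul += 1.0 - b["failed"]
        b = {s: sum(b[p] * TRANS[p][s] for p in STATES) for s in STATES}
    return rul
```

Starting from a confident "new" belief and feeding a worn-looking observation shifts probability mass toward "worn", shortening the expected RUL on subsequent steps.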
Heumann, Frederick K.; Wilkinson, Jay C.; Wooding, David R.
1997-01-01
A remote appliance for supporting a tool for performing work at a worksite on a substantially circular bore of a workpiece and for providing video signals of the worksite to a remote monitor comprising: a baseplate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the baseplate and positioned to roll against the bore of the workpiece when the baseplate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the baseplate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the baseplate such that the working end of the tool is positioned on the inner face side of the baseplate; a camera for providing video signals of the worksite to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the baseplate, the camera holding means being adjustably attached to the outer face of the baseplate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris.
NASA Astrophysics Data System (ADS)
Vasudevan, Srivathsan; Chen, George Chung Kit; Andika, Marta; Agarwal, Shuchi; Chen, Peng; Olivo, Malini
2010-09-01
Red blood cells (RBCs) have been found to undergo "programmed cell death," or eryptosis, and understanding this process can provide more information about apoptosis of nucleated cells. Photothermal (PT) response, a label-free, noninvasive photothermal technique, is proposed as a tool to monitor the death process of living human RBCs upon glucose depletion. Since the physiological status of dying cells is highly sensitive to photothermal parameters (e.g., thermal diffusivity, absorption, etc.), we applied the linear PT response to continuously monitor the death mechanism of RBCs depleted of glucose. The kinetics of the assay, in which the cell's PT response transforms from the linear to the nonlinear regime, is reported. In addition, quantitative monitoring was performed by extracting the relevant photothermal parameters from the PT response. A twofold increase in thermal diffusivity and a reduction in cell size were found in the linear PT response during cell death. Our results reveal that photothermal parameters change earlier than phosphatidylserine externalization (used in fluorescence studies), allowing us to detect the initial stage of eryptosis in a quantitative manner. Hence, the proposed tool, in addition to detecting eryptosis earlier than fluorescence, can also reveal the physiological status of the cells through quantitative photothermal parameter extraction.
Active Job Monitoring in Pilots
NASA Astrophysics Data System (ADS)
Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas
2015-12-01
Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. In particular, the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, as well as the need to identify misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at the batch job level, which complicates network-aware scheduling and optimisations. In addition, pilots add another layer of abstraction: they behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured, and identifying the actual payload is important. At the GridKa Tier 1 centre, a specific tool is in use that allows the monitoring of network traffic information at the batch job level. This contribution presents the current monitoring approach, discusses recent efforts to identify pilots and their substructures inside the batch system and why this identification matters, and shows how to determine monitoring data of specific jobs from identified pilots. Finally, the approach is evaluated.
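The payload-identification problem described above can be sketched as follows: given a snapshot of the process tree inside a pilot and per-process traffic counters, attribute traffic to the payload whose process subtree contains each process. This is a hypothetical illustration of the general idea, not the GridKa tool; all names and data structures are assumptions.

```python
def subtree(procs, root):
    """All pids in the process subtree rooted at `root`.
    `procs` maps pid -> parent pid, as read from a process snapshot."""
    pids, frontier = {root}, [root]
    while frontier:
        parent = frontier.pop()
        for pid, ppid in procs.items():
            if ppid == parent and pid not in pids:
                pids.add(pid)
                frontier.append(pid)
    return pids

def traffic_per_payload(procs, payload_roots, bytes_by_pid):
    """Sum per-process traffic counters over each payload's subtree,
    giving batch-job-level network usage inside a pilot."""
    return {root: sum(bytes_by_pid.get(pid, 0) for pid in subtree(procs, root))
            for root in payload_roots}
```

In a real deployment the process snapshot and traffic counters would come from the batch node itself (e.g. procfs-style accounting); here they are plain dictionaries for clarity.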
Development of New Sensing Materials Using Combinatorial and High-Throughput Experimentation
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Mirsky, Vladimir M.
New sensors with improved performance characteristics are needed for applications as diverse as bedside continuous monitoring, tracking of environmental pollutants, monitoring of food and water quality, monitoring of chemical processes, and safety in industrial, consumer, and automotive settings. Typical requirements for sensor improvement are selectivity, long-term stability, sensitivity, response time, reversibility, and reproducibility. The design of new sensing materials is an important cornerstone of the effort to develop new sensors. Often, sensing materials are too complex for their performance to be predicted quantitatively at the design stage. Thus, combinatorial and high-throughput experimentation methodologies provide an opportunity to generate the data required to discover new sensing materials and/or to optimize existing material compositions. The goal of this chapter is to provide an overview of the key concepts of experimental development of sensing materials using combinatorial and high-throughput experimentation tools, and to promote further fruitful interactions between computational scientists and experimentalists.
Wang, Ya-Wen; Liu, Yan-Ling; Xu, Jia-Quan; Qin, Yu; Huang, Wei-Hua
2018-05-15
Stretchable electrochemical (EC) sensors have broad prospects in real-time monitoring of living cells and tissues owing to their excellent elasticity and deformability. However, the redox reaction products and cell secretions are easily adsorbed on the electrode, resulting in sensor fouling and passivation. Herein, we developed a stretchable and photocatalytically renewable EC sensor based on Au nanotubes (NTs) and TiO2 nanowires (NWs) sandwich nanonetworks. The external Au NTs are used for EC sensing, and internal TiO2 NWs provide photocatalytic performance to degrade contaminants, which endows the sensor with excellent EC performance, high photocatalytic activity, and favorable mechanical tensile property. This allows highly sensitive recycling monitoring of NO released from endothelial cells and 5-HT released from mast cells under their stretching states in real time, therefore providing a promising tool to unravel elastic and mechanically sensitive cells, tissues, and organs.
McKanna, James A; Pavel, Misha; Jimison, Holly
2010-11-13
Assessment of cognitive functionality is an important aspect of care for elders. Unfortunately, few tools exist to measure divided attention, the ability to allocate attention to different aspects of tasks. An accurate determination of divided attention would allow inference of generalized cognitive decline, as well as providing a quantifiable indicator of an important component of driving skill. We propose a new method for determining relative divided attention ability through unobtrusive monitoring of computer use. Specifically, we measure performance on a dual-task cognitive computer exercise as part of a health coaching intervention. This metric indicates whether the user has the ability to pay attention to both tasks at once, or is primarily attending to one task at a time (sacrificing optimal performance). The monitoring of divided attention in a home environment is a key component of both the early detection of cognitive problems and for assessing the efficacy of coaching interventions.
Maritime Situational Awareness: The MARISS Experience
NASA Astrophysics Data System (ADS)
Margarit, G.; Tabasco, A.; Gomez, C.
2010-04-01
This paper presents the operational solution developed by GMV to support maritime situational awareness via Earth Observation (EO) technologies. The concept centers on integrating the information retrieved from Synthetic Aperture Radar (SAR) images and transponder-based polls (AIS and similar) in an advanced GeoPortal web. The service was designed in the framework of the MARISS project, conceived to help improve ship monitoring with the support of a large user segment. In this context, interaction with official agencies has provided good feedback on system performance and its usefulness in supporting monitoring and surveillance tasks. Some representative samples are analyzed in the paper to validate key kernel utilities, such as ship and coastline detection and ship classification. They justify the promotion of extended R&D activities to increase monitoring performance and to include advanced added-value tools, such as decision making and route tracking.
Investigation of Bearing Fatigue Damage Life Prediction Using Oil Debris Monitoring
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Bolander, Nathan; Haynes, Chris; Toms, Allison M.
2011-01-01
Research was performed to determine whether a diagnostic tool for detecting fatigue damage of helicopter tapered roller bearings can be used to determine remaining useful life (RUL). The tapered roller bearings under study were installed on the tail gearbox (TGB) output shaft of UH-60M helicopters, removed from the helicopters, and subsequently installed in a bearing spall propagation test rig. The diagnostic tool was developed and evaluated experimentally by collecting oil debris data during spall progression tests on four bearings. During each test, data from an on-line, in-line, inductance-type oil debris sensor was monitored and recorded for the occurrence of pitting damage. Results from the four bearings tested indicate that measuring the debris generated when a bearing outer race begins to spall can be used to indicate bearing damage progression and remaining bearing life.
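A minimal sketch of the debris-based indicator described above: accumulate the mass reported by an inductance-type oil debris sensor and flag damage progression once a cumulative threshold is crossed. The threshold and readings below are illustrative values, not data from the study.

```python
def debris_alarm(mass_readings_mg, threshold_mg):
    """Accumulate sensed debris mass and find where damage is flagged.

    Returns (cumulative_mass, index of the first reading at which the
    running total crosses `threshold_mg`, or None if never crossed)."""
    total = 0.0
    crossed_at = None
    for i, m in enumerate(mass_readings_mg):
        total += m
        if crossed_at is None and total >= threshold_mg:
            crossed_at = i
    return total, crossed_at
```

In practice the threshold would be calibrated against spall progression tests like the four described in the abstract; a rising accumulation rate, not just the total, is often the more telling signal.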
NASA Astrophysics Data System (ADS)
Prasad, Balla Srinivasa; Prabha, K. Aruna; Kumar, P. V. S. Ganesh
2017-03-01
In metal cutting, the major factors that affect cutting tool life are machine tool vibrations, tool tip/chip temperature, and surface roughness, along with machining parameters such as cutting speed, feed rate, depth of cut, and tool geometry; it is therefore important for the manufacturing industry to find suitable levels of process parameters for maintaining tool life. Heat generation has always been a central topic in machining research. Recent advances in signal processing and information technology have led to the use of multiple sensors for the development of effective tool condition monitoring (TCM) systems with improved accuracy. From a process improvement point of view, it is more advantageous to proactively monitor quality directly in the process rather than in the product, so that the consequences of a defective part can be minimized or even eliminated. In the present work, a real-time process monitoring method using multiple sensors is explored. It focuses on the development of a test bed for monitoring tool condition in turning of AISI 316L steel using both coated and uncoated carbide inserts. The proposed TCM system is evaluated in high-speed turning using multiple sensors, namely a laser Doppler vibrometer and infrared thermography. The results indicate the feasibility of using the dominant frequency of the vibration signals, together with temperature gradients, to monitor high-speed turning operations. A possible correlation is identified for both regular and irregular cutting tool wear. Cutting speed and feed rate proved to be influential parameters on the measured temperatures, while depth of cut was less influential. In general, lower heat and temperatures are generated when coated inserts are employed, and cutting temperatures gradually increase as edge wear and deformation develop.
Portable monitoring for the diagnosis of obstructive sleep apnea.
Collop, Nancy A
2008-11-01
The demand for expedient diagnosis of suspected obstructive sleep apnea (OSA) has increased due to improved awareness of sleep disorders. Polysomnography (PSG) is the current preferred diagnostic modality but is relatively inconvenient, expensive and inefficient. Portable monitoring has been developed and is widely used in countries outside the United States as an alternative approach. A portable monitor records fewer physiologic variables but is typically unattended and can be performed in the home. Numerous portable monitor studies have been performed over the past two to three decades. The US government and medical societies have extensively reviewed this literature several times in an attempt to determine if portable monitoring should be more broadly used for diagnosing OSA. In March 2008, the US Centers for Medicare and Medicaid Services released a statement allowing the use of portable monitoring to diagnose OSA and prescribe continuous positive airway pressure. This has potentially opened the door for more widespread use of these devices. This review will focus on the literature that has examined portable monitoring as a diagnostic tool for OSA. It is anticipated that portable monitoring as a diagnostic modality for OSA will be used more frequently in the United States following the Centers for Medicare and Medicaid Services ruling. Physicians and others considering the use of portable monitors should thoroughly understand the advantages and limitations of this technology.
Weingart, Saul N; Yaghi, Omar; Wetherell, Matthew; Sweeney, Megan
2018-04-10
To examine the composition and concordance of existing instruments used to assess medical teams' performance. A trained observer joined 20 internal medicine housestaff teams for morning work rounds at Tufts Medical Center, a 415-bed Boston teaching hospital, from October through December 2015. The observer rated each team's performance using 9 teamwork observation instruments that examined domains including team structure, leadership, situation monitoring, mutual support, and communication. Observations recorded on paper forms were stored electronically. Scores were normalized from 1 (low) to 5 (high) to account for different rating scales. Overall mean scores were calculated and graphed; weighted scores adjusted for the number of items in each teamwork domain. Teamwork scores were analyzed using t-tests, pair-wise correlations, and the Kruskal-Wallis statistic, and team performance was compared across instruments by domain. The 9 tools incorporated 5 major domains, with 5-35 items per instrument for a total of 161 items per observation session. In weighted and unweighted analyses, the overall teamwork performance score for a given team on a given day varied by instrument. While all of the tools identified the same low outlier, high performers on some instruments were low performers on others. Inconsistent scores for a given team across instruments persisted in domain-level analyses. There was substantial variation in the rating of individual teams assessed concurrently by a single observer using multiple instruments. Since existing teamwork observation tools do not yield concordant assessments, researchers should create better tools for measuring teamwork performance.
Topological and Geometric Tools for the Analysis of Complex Networks
2013-10-01
Contract number: FA9550-09-1-0090. Authors: Ali Jadbabaie (Penn), Shing-Tung Yau (Harvard), Fan Chung. Performing organization: University of Pennsylvania, 34th and Spruce Street, Philadelphia, PA 19104-6303. Sponsoring/monitoring agency: Air Force Office of Scientific Research, 875 North Randolph Street.
ERIC Educational Resources Information Center
Zhou, P.; Ang, B. W.
2009-01-01
Composite indicators have been increasingly recognized as a useful tool for performance monitoring, benchmarking comparisons and public communication in a wide range of fields. The usefulness of a composite indicator depends heavily on the underlying data aggregation scheme where multiple criteria decision analysis (MCDA) is commonly used. A…
Stable isotope analyses of stream organisms usually are performed as discrete site experiments (e.g., to study the effect of a direct manipulation), synoptically (e.g. to illustrate effects of longitudinal variation of influencing factors), or, less frequently, over the course of...
Evaluation Checklist for Student Writing in Grades K-3, Ottawa County.
ERIC Educational Resources Information Center
Ottawa County Office of Education, OH.
Developed to assist teachers in Ottawa County, Ohio, in monitoring students' pupil performance objectives (PPOs) in grades K-3, this writing evaluation form is the primary record keeping tool in the Competency Based Education (CBE) Program. The form consists of: (1) the evaluation checklist; (2) the intervention code; and (3) record keeping…
Fuady, Ahmad; Houweling, Tanja A; Mansyur, Muchtaruddin; Richardus, Jan H
2018-01-01
Indonesia has the second-highest tuberculosis (TB) incidence worldwide. Hence, it urgently requires improvements and innovations beyond the strategies currently being implemented throughout the country. One fundamental step in monitoring progress is preparing a validated tool to measure total patient costs and catastrophic total costs. The World Health Organization (WHO) recommends using a version of the generic questionnaire adapted to the local cultural context so that findings can be interpreted correctly. This study aimed to adapt the Tool to Estimate Patient Costs questionnaire, which measures total costs and catastrophic total costs for tuberculosis-affected households, to the Indonesian context. The tool was adapted using best-practice guidelines. On the basis of a pre-test performed in a previous study (referred to as the Phase 1 Study), we refined the adaptation process by comparing it with the generic tool introduced by the WHO. We also held an expert committee review and performed pre-testing by interviewing 30 TB patients. After pre-testing, the tool was provided with complete explanation sheets for finalization. Seventy-two major changes were made during the adaptation process, including changing the answer choices to match the Indonesian context, refining the flow of questions, deleting questions, changing some words, and restoring original questions that had been changed in the Phase 1 Study. Participants indicated that most questions were clear and easy to understand. To address recall difficulties, we made some adaptations to obtain data that might be missing, such as tracking data in medical records, developing cost proxies, and guiding interviewers to ask for a specific value when participants were uncertain about the estimated market value of property they had sold.
The adapted Tool to Estimate Patient Costs in Bahasa Indonesia is comprehensive and ready for use in future studies on TB-related catastrophic costs, and is suitable for monitoring progress toward the target of the End TB Strategy.
Unobtrusive Monitoring of Spaceflight Team Functioning
NASA Technical Reports Server (NTRS)
Maidel, Veronica; Stanton, Jeffrey M.
2010-01-01
This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software or tools revealed no available tools to enable extraction of mental models automatically, relying only on collected communication text. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. 
Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.
Mobile phone tools for field-based health care workers in low-income countries.
Derenzi, Brian; Borriello, Gaetano; Jackson, Jonathan; Kumar, Vikram S; Parikh, Tapan S; Virk, Pushwaz; Lesh, Neal
2011-01-01
In low-income regions, mobile phone-based tools can improve the scope and efficiency of field health workers. They can also address challenges in monitoring and supervising a large number of geographically distributed health workers. Several tools have been built and deployed in the field, but little comparison has been done to help understand their effectiveness. This is largely because no framework exists in which to analyze the different ways in which the tools help strengthen existing health systems. In this article we highlight 6 key functions that health systems currently perform where mobile tools can provide the most benefit. Using these 6 health system functions, we compare existing applications for community health workers, an important class of field health workers who use these technologies, and discuss common challenges and lessons learned about deploying mobile tools. © 2011 Mount Sinai School of Medicine.
THE DESIGN OF PERFORMANCE MONITORING SYSTEMS IN HEALTHCARE ORGANIZATIONS: A STAKEHOLDER PERSPECTIVE.
Rouhana, Rima E; Van Caillie, Didier
2016-01-01
Monitoring hospital performance is evolving over time in search of more efficiency by integrating additional levels of care, reducing costs, and keeping staff up to date. To fulfill these three potentially divergent aims and to monitor performance, healthcare administrators use dissimilar management control tools. To explain why, we suggest going beyond traditional contingent factors to assess the role of the different stakeholders at the heart of any healthcare organization. We rely first on seminal studies to appraise the role of the main healthcare players and their influence on some organizational attributes. We then consider managerial awareness and the perception of a suitable management system to promote a strategy-focused organization. Our methodology is based on a qualitative approach of twenty-two case studies, conducted in two heterogeneous environments (Belgium and Lebanon), comparing the managerial choice of a management system within three different healthcare organizational structures. Our findings allow us to illustrate, for each healthcare player, his positioning within the healthcare system, and to define how his role, perception, and responsiveness shape the organization's internal climate and the design of its performance monitoring systems. In particular, we highlight the managerial role and its influence on the choice of an adequate management system.
myBrain: a novel EEG embedded system for epilepsy monitoring.
Pinho, Francisco; Cerqueira, João; Correia, José; Sousa, Nuno; Dias, Nuno
2017-10-01
The World Health Organization has pointed out that successful health care delivery requires effective medical devices as tools for prevention, diagnosis, treatment and rehabilitation. Several studies have concluded that longer monitoring periods and outpatient settings can increase diagnostic accuracy and the success rate of treatment selection. Long-term monitoring of epileptic patients through electroencephalography (EEG) is considered a powerful tool to improve the diagnosis, disease classification, and treatment of patients with this condition. This work presents the development of a wireless and wearable EEG acquisition platform suitable for both long-term and short-term monitoring in inpatient and outpatient settings. The developed platform features 32 passive dry electrodes and analogue-to-digital signal conversion with 24-bit resolution at a sampling frequency variable from 250 Hz to 1000 Hz per channel, embedded in a stand-alone module. A computer-on-module embedded system runs a Linux® operating system that manages the interface between two software frameworks, which interact to satisfy the real-time constraints of signal acquisition as well as parallel recording, processing, and wireless data transmission. A textile structure was developed to accommodate all components. Platform performance was evaluated in terms of hardware, software, and signal quality. The electrodes were characterised through electrochemical impedance spectroscopy, and the operating system performance was evaluated while running an epileptic discrimination algorithm. Signal quality was thoroughly assessed in two different approaches: playback of EEG reference signals and benchmarking against a clinical-grade EEG system in alpha-wave replacement and steady-state visual evoked potential paradigms. 
The proposed platform seems to efficiently monitor epileptic patients in both inpatient and outpatient settings and paves the way to new ambulatory clinical regimens as well as non-clinical EEG applications.
Semi-autonomous remote sensing time series generation tool
NASA Astrophysics Data System (ADS)
Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher
2017-10-01
High spatial and temporal resolution data are vital for crop monitoring and phenology change detection. Owing to satellite architecture limitations and frequent cloud cover, daily data at high spatial resolution are still far from reality. Generating remote sensing time series of high spatial and temporal resolution by data fusion appears to be a practical alternative. However, it is not an easy process, since it involves multiple steps and requires multiple tools. In this paper, a framework for a Geographic Information System (GIS) based tool is presented for semi-autonomous time series generation. This tool eliminates these difficulties by automating all the steps and enables users to generate synthetic time series data with ease. First, all the steps required for the time series generation process are identified and grouped into blocks based on their functionalities. Two main frameworks are then created: one to perform all the pre-processing steps on various satellite data and the other to perform data fusion to generate the time series. The two frameworks can be used individually to perform specific tasks, or they can be combined to perform both processes in one go. The tool can handle most of the known geo data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. It is developed as a common platform with a clean interface that provides many functions to enable further development of more remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.
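The data fusion step can be illustrated with a deliberately simplified scheme (an assumption for illustration, not the tool's actual algorithm): predict a fine-resolution image at date t by adding the temporal change observed at coarse resolution to a fine-resolution base image.

```python
import numpy as np

def fuse(fine_t0, coarse_t0, coarse_t):
    """Predict a fine-resolution image at date t by adding the change
    observed at coarse resolution to the fine-resolution base image.
    Real fusion methods such as STARFM weight spectrally similar
    neighbouring pixels; this additive correction is the simplest
    possible sketch of the idea."""
    return fine_t0 + (coarse_t - coarse_t0)

# Synthetic example: reflectance rose by 0.05 between the two dates
fine_t0 = np.full((2, 2), 0.30)
pred = fuse(fine_t0, np.full((2, 2), 0.25), np.full((2, 2), 0.30))
```

Chaining such per-block functions (re-projection, resampling, cloud masking, then fusion) is what the two frameworks automate.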
Tool Wear Monitoring Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu
A tool wear monitoring approach that accounts for the nonlinear behavior of the cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through cutting experiments and actual turning operations. The variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the actually measured vibration signal and the estimated signal obtained from a time series model corresponding to the dynamic model of cutting is introduced as the diagnostic feature. It is found that the early tool wear state (i.e. flank wear under 40 µm) can be monitored, and that the optimal tool exchange time and the tool wear state in actual turning can be judged from the change in this residual error. Moreover, the variation of surface roughness Pz in the range of 3 to 8 µm can be estimated by monitoring the residual error.
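The diagnostic feature described above can be sketched as follows: fit a time-series (AR) model to vibration data from a healthy cutting state, then use the RMS residual between measured and model-predicted signals as the wear indicator. The model order and signals below are illustrative, not the paper's actual data:

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares fit of an AR(p) model: x[t] ~ sum_i a[i] * x[t-1-i]."""
    X = np.column_stack([x[p - 1 - i : len(x) - 1 - i] for i in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def residual_rms(x, a):
    """RMS of the one-step prediction error -- the diagnostic feature."""
    p = len(a)
    X = np.column_stack([x[p - 1 - i : len(x) - 1 - i] for i in range(p)])
    return float(np.sqrt(np.mean((x[p:] - X @ a) ** 2)))

# A pure sinusoid obeys an AR(2) relation exactly, so the model fitted to
# the "healthy" signal leaves a near-zero residual; a changed (worn)
# cutting process produces a larger residual under the same model.
t = np.arange(200)
healthy = np.sin(0.3 * t)
a = ar_fit(healthy, 2)
```

A rising `residual_rms` over successive cuts would then signal wear or chipping long before the model itself is refitted.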
NASA Astrophysics Data System (ADS)
Crawford, S. M.; Crause, Lisa; Depagne, Éric; Ilkiewicz, Krystian; Schroeder, Anja; Kuhn, Rudolph; Hettlage, Christian; Romero Colmenaro, Encarni; Kniazev, Alexei; Väisänen, Petri
2016-08-01
The High Resolution Spectrograph (HRS) on the Southern African Large Telescope (SALT) is a dual beam, fiber-fed echelle spectrograph providing high resolution capabilities to the SALT observing community. We describe the available data reduction tools and the procedures put in place for regular monitoring of the data quality from the spectrograph. Data reductions are carried out through the pyhrs package. The data characteristics and instrument stability are reported as part of the SALT Dashboard to help monitor the performance of the instrument.
Measuring, managing and maximizing performance of mineral processing plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bascur, O.A.; Kennedy, J.P.
1995-12-31
The implementation of continuous quality improvement is the confluence of Total Quality Management, People Empowerment, Performance Indicators and Information Engineering. The supporting information technologies allow a mineral processor to narrow the gap between management business objectives and the process control level. One of the most important contributors is the user friendliness and flexibility of the personal computer in a client/server environment. This synergistic combination, when used for real-time performance monitoring, translates into production cost savings, improved communications and enhanced decision support. Other savings come from reduced time to collect data and perform tedious calculations, the ability to act quickly on fresh data, and the generation and validation of data to be used by others. This paper presents an integrated view of plant management. The selection of the proper tools for continuous quality improvement is described. The process of selecting critical performance monitoring indices for improved plant performance is discussed. The importance of well-balanced technological improvement, personnel empowerment, total quality management and organizational assets is stressed.
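One classic performance index of the kind discussed above is metallurgical recovery computed with the standard two-product formula from feed, concentrate and tailings assays. The assay values below are invented for illustration:

```python
def two_product_recovery(f, c, t):
    """Metallurgical recovery (%) via the two-product formula,
    from feed (f), concentrate (c) and tailings (t) assays (% metal).

    R = 100 * c * (f - t) / (f * (c - t))
    """
    return 100.0 * c * (f - t) / (f * (c - t))

# Example: 2% feed grade upgraded to a 20% concentrate with 0.2% tails
r = two_product_recovery(2.0, 20.0, 0.2)
```

Trending such an index in real time against a target band is exactly the kind of performance monitoring the paper advocates.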
Computer assisted blast design and assessment tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, A.R.; Kleine, T.H.; Forsyth, W.W.
1995-12-31
In general the software required by a blast designer includes tools that graphically present blast designs (surface and underground), can analyze a design or predict its result, and can assess blasting results. As computers develop and computer literacy continues to rise, the development and use of such tools will spread. Examples of the tools that are becoming available include: automatic blast pattern generation and underground ring design; blast design evaluation in terms of explosive distribution and detonation simulation; fragmentation prediction; blast vibration prediction and minimization; blast monitoring for assessment of dynamic performance; vibration measurement, display and signal processing; evaluation of blast results in terms of fragmentation; and risk- and reliability-based blast assessment. The authors have identified a set of criteria that are essential in choosing appropriate software blasting tools.
A combined scanning tunneling microscope-atomic layer deposition tool.
Mack, James F; Van Stockum, Philip B; Iwadate, Hitoshi; Prinz, Fritz B
2011-12-01
We have built a combined scanning tunneling microscope-atomic layer deposition (STM-ALD) tool that performs in situ imaging of deposition. It operates from room temperature up to 200 °C, and at pressures from 1 × 10⁻⁶ Torr to 1 × 10⁻² Torr. The STM-ALD system has a complete passive vibration isolation system that counteracts both seismic and acoustic excitations. The instrument can be used as an observation tool to monitor the initial growth phases of ALD in situ, as well as a nanofabrication tool by applying an electric field with the tip to laterally pattern deposition. In this paper, we describe the design of the tool and demonstrate its capability for atomic resolution STM imaging, atomic layer deposition, and the combination of the two techniques for in situ characterization of deposition.
Zijlstra, Carolien; Lund, Ivar; Justesen, Annemarie F; Nicolaisen, Mogens; Jensen, Peter Kryger; Bianciotto, Valeria; Posta, Katalin; Balestrini, Raffaella; Przetakiewicz, Anna; Czembor, Elzbieta; van de Zande, Jan
2011-06-01
The possibility of combining novel monitoring techniques and precision spraying for crop protection in the future is discussed. A generic model for an innovative crop protection system has been used as a framework. This system will be able to monitor the entire cropping system and identify the presence of relevant pests, diseases and weeds online, and will be location specific. The system will offer prevention, monitoring, interpretation and action which will be performed in a continuous way. The monitoring is divided into several parts. Planting material, seeds and soil should be monitored for prevention purposes before the growing period to avoid, for example, the introduction of disease into the field and to ensure optimal growth conditions. Data from previous growing seasons, such as the location of weeds and previous diseases, should also be included. During the growing season, the crop will be monitored at a macroscale level until a location that needs special attention is identified. If relevant, this area will be monitored more intensively at a microscale level. A decision engine will analyse the data and offer advice on how to control the detected diseases, pests and weeds, using precision spray techniques or alternative measures. The goal is to provide tools that are able to produce high-quality products with the minimal use of conventional plant protection products. This review describes the technologies that can be used or that need further development in order to achieve this goal. Copyright © 2011 Society of Chemical Industry.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability into a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
2013-01-01
The monitoring of the cardiac output (CO) and other hemodynamic parameters, traditionally performed with the thermodilution method via a pulmonary artery catheter (PAC), is now increasingly done with the aid of less invasive and much easier to use devices. When used within the context of a hemodynamic optimization protocol, they can positively influence the outcome in both surgical and non-surgical patient populations. While these monitoring tools have simplified the hemodynamic calculations, they are subject to limitations and can lead to erroneous results if not used properly. In this article we will review the commercially available minimally invasive CO monitoring devices, explore their technical characteristics and describe the limitations that should be taken into consideration when clinical decisions are made. PMID:24472443
Lee, Eleanor S.; Geisler-Moroder, David; Ward, Gregory
2017-12-23
Simulation tools that enable annual energy performance analysis of optically-complex fenestration systems have been widely adopted by the building industry for use in building design, code development, and the development of rating and certification programs for commercially-available shading and daylighting products. The tools rely on a three-phase matrix operation to compute solar heat gains, using as input low-resolution bidirectional scattering distribution function (BSDF) data (10–15° angular resolution; BSDF data define the angle-dependent behavior of light-scattering materials and systems). Measurement standards and product libraries for BSDF data are undergoing development to support solar heat gain calculations. Simulation of other metrics such as discomfort glare, annual solar exposure, and potentially thermal discomfort, however, require algorithms and BSDF input data that more accurately model the spatial distribution of transmitted and reflected irradiance or illuminance from the sun (0.5° resolution). This study describes such algorithms and input data, then validates the tools (i.e., an interpolation tool for measured BSDF data and the five-phase method) through comparisons with ray-tracing simulations and field monitored data from a full-scale testbed. Simulations of daylight-redirecting films, a micro-louvered screen, and venetian blinds using variable resolution, tensor tree BSDF input data derived from interpolated scanning goniophotometer measurements were shown to agree with field monitored data to within 20% for greater than 75% of the measurement period for illuminance-based performance parameters. The three-phase method delivered significantly less accurate results. We discuss the ramifications of these findings on industry and provide recommendations to increase end user awareness of the current limitations of existing software tools and BSDF product libraries.
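The three-phase matrix operation mentioned above computes sensor illuminance as a chain of linear operators, E = V T D s, where s is a sky vector, D the daylight matrix, T the fenestration BSDF matrix, and V the view matrix. The toy dimensions and values below are illustrative only:

```python
import numpy as np

# Toy dimensions: 4 sky patches, 3 incident/outgoing BSDF directions,
# 2 interior sensors (real Radiance runs use e.g. 145 Tregenza patches).
s = np.ones(4)              # sky vector: radiance of each sky patch
D = np.full((3, 4), 0.1)    # daylight matrix: sky -> incident directions
T = np.eye(3) * 0.5         # BSDF transmission matrix of the fenestration
V = np.full((2, 3), 0.2)    # view matrix: outgoing directions -> sensors

E = V @ T @ D @ s           # sensor illuminances, three-phase method
```

Because T is a standalone factor, swapping in a different product's BSDF reuses the precomputed V and D, which is what makes annual parametric studies cheap; the paper's point is that the coarse angular resolution of T is what limits accuracy for glare and solar-exposure metrics.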
Smartphone ECG aids real time diagnosis of palpitations in the competitive college athlete.
Peritz, David C; Howard, Austin; Ciocca, Mario; Chung, Eugene H
2015-01-01
Rapidly detecting dangerous arrhythmias in a symptomatic athlete continues to be an elusive goal. The use of handheld smartphone electrocardiogram (ECG) monitors could represent a helpful tool connecting the athletic trainer to the cardiologist. Six college athletes presented to their athletic trainers complaining of palpitations during exercise. A single-lead ECG was performed using the AliveCor Heart Monitor and sent wirelessly to the team cardiologist, who confirmed the absence of a dangerous arrhythmia. AliveCor monitoring has the potential to enhance the evaluation of symptomatic athletes by allowing trainers and team physicians to make a diagnosis in real time and facilitate faster return to play. Copyright © 2015 Elsevier Inc. All rights reserved.
Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A
2014-12-01
High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. 
The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves and the majority of the apparent leaf motion was attributed to beam spot displacements between irradiations. The MLC QC test was performed 193 and 162 times over the monitoring period for the studied units and recalibration had to be repeated up to three times on one of these units. For both units, rate of MLC interlocks was moderately associated with MLC servicing events. The strongest association with the MLC performance was observed between the MLC servicing events and the total number of out-of-control leaves. The average elapsed time for which the number of out-of-specification or out-of-control leaves was within a given performance threshold was computed and used to assess adequacy of MLC test frequency. A MLC performance monitoring system has been developed and implemented to acquire high-quality QC data at high frequency. This is enabled by the relatively short acquisition time for the images and automatic image analysis. The monitoring system was also used to record and track the rate of MLC-related interlocks and servicing events. MLC performances for two commercially available MLC models have been assessed and the results support monthly test frequency for widely accepted ± 1 mm specifications. Higher QC test frequency is however required to maintain tighter specification and in-control behavior.
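The per-leaf individuals control charts described above can be sketched as follows. The 2.66 factor is the standard SPC constant relating the mean moving range to three-sigma limits for an individuals (I) chart; the leaf-position deviations below are invented:

```python
import numpy as np

def individuals_limits(x):
    """Control limits for an individuals (I) chart:
    mean +/- 2.66 * mean moving range (standard SPC constant
    for two-point moving ranges)."""
    mr = np.abs(np.diff(x))           # two-point moving ranges
    center = np.mean(x)
    half = 2.66 * np.mean(mr)
    return center - half, center + half

def out_of_control(x, lcl, ucl):
    """Indices of measurements falling outside the control limits."""
    return [i for i, v in enumerate(x) if v < lcl or v > ucl]
```

In the monitoring system, limits computed per leaf from a baseline period flag nonrandom positional drift (out-of-control points) separately from the fixed ±0.5/±1 mm specification checks.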
Bishai, David; Sherry, Melissa; Pereira, Claudia C; Chicumbe, Sergio; Mbofana, Francisco; Boore, Amy; Smith, Monica; Nhambi, Leonel; Borse, Nagesh N
2016-01-01
This study describes the development of a self-audit tool for public health and the associated methodology for implementing a district health system self-audit tool that can provide quantitative data on how district governments perceive their performance of the essential public health functions. Development began with a consensus-building process to engage Ministry of Health and provincial health officers in Mozambique and Botswana. We then worked with lists of relevant public health functions as determined by these stakeholders to adapt a self-audit tool describing essential public health functions to each country's health system. We then piloted the tool across districts in both countries and conducted interviews with district health personnel to determine health workers' perception of the usefulness of the approach. Country stakeholders were able to develop consensus around 11 essential public health functions that were relevant in each country. Pilots of the self-audit tool enabled the tool to be effectively shortened. Pilots also disclosed a tendency to upcode during self-audits that was checked by group deliberation. Convening sessions at the district enabled better attendance and representative deliberation. Instant feedback from the audit was a feature that 100% of pilot respondents found most useful. The development of metrics that provide feedback on public health performance can be used as an aid in the self-assessment of health system performance at the district level. Measurements of practice can open the door to future applications for practice improvement and research into the determinants and consequences of better public health practice. The current tool can be assessed for its usefulness to district health managers in improving their public health practice. The tool can also be used by the Ministry of Health or external donors in the African region for monitoring the district-level performance of the essential public health functions.
Bishai, David; Sherry, Melissa; Pereira, Claudia C.; Chicumbe, Sergio; Mbofana, Francisco; Boore, Amy; Smith, Monica; Nhambi, Leonel; Borse, Nagesh N.
2018-01-01
Introduction This study describes the development of a self-audit tool for public health and the associated methodology for implementing a district health system self-audit tool that can provide quantitative data on how district governments perceive their own performance of the essential public health functions. Methods Development began with a consensus-building process to engage Ministry of Health and provincial health officers in Mozambique and Botswana. We then worked with lists of relevant public health functions as determined by these stakeholders to adapt a self-audit tool describing essential public health functions to each country’s health system. We then piloted the tool across districts in both countries and conducted interviews with district health personnel to determine health workers’ perception of the usefulness of the approach. Results Country stakeholders were able to develop consensus around eleven essential public health functions that were relevant in each country. Pilots of the self-audit tool enabled the tool to be effectively shortened. Pilots also disclosed a tendency to upcode during self-audits that was checked by group deliberation. Convening sessions at the district enabled better attendance and representative deliberation. Instant feedback from the audit was a feature that 100% of pilot respondents found most useful. Conclusions The development of metrics that provide feedback on public health performance can be used as an aid in the self-assessment of health system performance at the district level. Measurements of practice can open the door to future applications for practice improvement and research into the determinants and consequences of better public health practice. The current tool can be assessed for its usefulness to district health managers in improving their public health practice. 
The tool can also be used by the Ministry of Health or external donors in the African region for monitoring the district-level performance of the essential public health functions. PMID:27682727
Balikuddembe, Michael S; Wakholi, Peter K; Tumwesigye, Nazarius M; Tylleskär, Thorkild
2018-01-01
A third of women in childbirth are inadequately monitored, partly due to the tools used. Some stakeholders assert that the current labour monitoring tools are not efficient and need improvement to become more relevant to childbirth attendants. The study objective was to explore the expectations of maternity service providers for a mobile childbirth monitoring tool in maternity facilities in a low-income country like Uganda. Semi-structured interviews of purposively selected midwives and doctors in rural and urban childbirth facilities in Uganda were conducted before thematic data analysis. The childbirth providers expected a tool that enabled fast and secure childbirth record storage and sharing. They desired a tool that would automatically and conveniently register patient clinical findings and actively provide interactive clinical decision support on a busy ward. The tool ought to support agreed-upon standards for good pregnancy outcomes while also being adaptable to the patient and to difficult working conditions. The tool's functionality should include clinical data management and real-time decision support for midwives, while the non-functional attributes include versatility and security.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful when it can be generated automatically out of Business Process Management (BPM) Environments from existing business process models and from performance parameters monitored in the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at the business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations, or a combination of such solutions into existing BPM Environments. The approach abstracts from process modelling techniques, which enables automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
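As a minimal example of the analytic performance prediction such tooling can provide (not the authors' actual model), a single business-process activity can be treated as an M/M/1 queue to answer what-if questions such as "what happens to cycle time if the arrival rate doubles?":

```python
def mm1_metrics(arrival_rate, service_rate):
    """Analytic M/M/1 performance prediction, a common what-if model:
    returns utilization, mean number in system, mean time in system."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable process: arrivals exceed capacity")
    L = rho / (1 - rho)                      # mean work items in system
    W = 1 / (service_rate - arrival_rate)    # mean cycle time
    return rho, L, W
```

Feeding monitored arrival and service rates from executed process instances into such closed-form models is what lets a BPM environment answer what-if questions without a full simulation run.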
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. 
The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
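The core idea of subscription-based event filtering can be sketched in a few lines: only events matching at least one subscriber's predicate are forwarded, so uninteresting events never generate downstream traffic. This is a minimal single-process sketch, not the paper's distributed architecture:

```python
class EventFilter:
    """Minimal subscription-based event filter: an event is forwarded
    only to subscribers whose predicate matches it, reducing the event
    traffic that reaches management applications."""

    def __init__(self):
        self.subs = []                      # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Register interest in events satisfying `predicate`."""
        self.subs.append((predicate, callback))

    def publish(self, event):
        """Offer an event to all subscribers; return forward count."""
        forwarded = 0
        for pred, cb in self.subs:
            if pred(event):
                cb(event)
                forwarded += 1
        return forwarded
```

In the distributed setting the same predicates are pushed as close as possible to the event sources, so filtering (and its cost) is spread across the system rather than concentrated at the monitor.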
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.; McNab, W. W.
2007-12-01
Long-term monitoring (LTM) is particularly important for contaminants which are mitigated by natural processes of dilution, dispersion, and degradation. At many sites, LTM can require decades of expensive sampling at tens or even hundreds of existing monitoring wells, resulting in hundreds of thousands, or millions, of dollars per year for sampling and data management. Therefore, contaminant sampling tools, methods and frequencies are chosen to minimize waste and data management costs while ensuring a reliable and informative time-history of contaminant measurement for regulatory compliance. The interplay between cause (i.e. subsurface heterogeneities, sampling techniques, measurement frequencies) and effect (unreliable data and measurement gaps) has been overlooked in many field applications, which can lead to inconsistencies in the time-histories of contaminant samples. In this study we address the relationship between cause and effect for different hydrogeological sampling settings: porous and fractured media. A numerical model has been developed using AMR-FEM to solve the physicochemical processes that take place in the aquifer and the monitoring well. In the latter, the flow is governed by the Navier-Stokes equations while in the former the flow is governed by the diffusivity equation; both are fully coupled to mimic stressed conditions and to assess the effect of the dynamic sampling tool on the formation surrounding the monitoring well. First, different sampling tools (e.g., Easy Pump, Snapper Grab Sampler) were simulated in a monitoring well screened in different homogeneous layered aquifers to assess their effect on the sampling measurements. 
Secondly, in order to make the computer runs more CPU-efficient, the flow in the monitoring well was replaced by its counterpart flow in a porous medium with infinite permeability, and the new model was used to simulate the effect of heterogeneities, sampling depth, sampling tool and sampling frequency on the uncertainties in the concentration measurements. Finally, the models and results were abstracted using a simple mixed-tank approach to further simplify the models and make them more accessible to field hydrogeologists. During the abstraction process a novel method was developed for mapping streamlines in the fractures as well as within the monitoring well to illustrate mixing and mixing zones. Applications are demonstrated for sampling in both porous and fractured media. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
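The mixed-tank abstraction mentioned above treats the monitoring well as a single well-stirred volume exchanging water with the formation. A minimal sketch of its analytic solution follows; the flow rate, volume and concentrations are illustrative, not values from the study:

```python
import math

def mixed_tank(c0, c_in, q, v, t):
    """Analytic solution of the well-mixed tank model
        dC/dt = (Q/V) * (C_in - C)
    for the in-well concentration after time t, starting at C(0) = c0.
    q: exchange flow rate, v: well-screen mixing volume."""
    return c_in + (c0 - c_in) * math.exp(-q * t / v)
```

The ratio V/Q is the flushing time constant; it tells a field hydrogeologist how long to purge (or how long a passive sampler must equilibrate) before a sample reflects formation water rather than stagnant casing water.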
Design and modelling of a link monitoring mechanism for the Common Data Link (CDL)
NASA Astrophysics Data System (ADS)
Eichelberger, John W., III
1994-09-01
The Common Data Link (CDL) is a full duplex, point-to-point microwave communications system used in imagery and signals intelligence collection systems. It provides a link between two remote Local Area Networks (LANs) aboard collection and surface platforms. In a hostile environment, there is an overwhelming need to dynamically monitor the link and thus limit the impact of jamming. This work describes steps taken to design, model, and evaluate a link monitoring system suitable for the CDL. The monitoring system is based on features and monitoring constructs of the Link Control Protocol (LCP) in the Point-to-Point Protocol (PPP) suite. The CDL model is based on a system of two remote Fiber Distributed Data Interface (FDDI) LANs. In particular, the policies and mechanisms associated with monitoring are described in detail. An implementation of the required mechanisms using the OPNET network engineering tool is described. Performance data related to monitoring parameters is reported. Finally, integration of the FDDI-CDL model with the OPNET Internet model is described.
Performance Analysis of and Tool Support for Transactional Memory on BG/Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schindewolf, M
2011-12-08
Martin Schindewolf worked during his internship at the Lawrence Livermore National Laboratory (LLNL) under the guidance of Martin Schulz at the Computer Science Group of the Center for Applied Scientific Computing. We studied the performance of the TM subsystem of BG/Q and researched the possibilities for tool support for TM. To study the performance, we ran CLOMP-TM, a benchmark designed to quantify the overhead of OpenMP and compare different synchronization primitives. To advance CLOMP-TM, we added Message Passing Interface (MPI) routines for a hybrid parallelization. This makes it possible to run multiple MPI tasks, each running OpenMP, on one node. With these enhancements, a beneficial MPI task to OpenMP thread ratio is determined. Further, the synchronization primitives are ranked as a function of the application characteristics. To demonstrate the usefulness of these results, we investigate a real Monte Carlo simulation called the Monte Carlo Benchmark (MCB). Applying the lessons learned yields the best task to thread ratio. We were also able to tune the synchronization by transactifying the MCB. Further, we developed tools that capture the performance of the TM run time system and present it to the application's developer. The performance of the TM run time system relies on the built-in statistics. These tools use the Blue Gene Performance Monitoring (BGPM) interface to correlate the statistics from the TM run time system with performance counter values. This combination provides detailed insights into the run time behavior of the application and makes it possible to track down the cause of degraded performance. In addition, one tool has been implemented that separates the performance counters into three categories: Successful Speculation, Unsuccessful Speculation and No Speculation. All of the tools are crafted around IBM's xlc compiler for C and C++ and have been run and tested on a Q32 early access system.
NASA Astrophysics Data System (ADS)
Audigier, Chloé; Kim, Younsu; Dillow, Austin; Boctor, Emad M.
2017-03-01
Radiofrequency ablation (RFA) is the most widely used minimally invasive ablative therapy for liver cancer, but it is challenged by a lack of patient-specific monitoring. Inter-patient tissue variability and the presence of blood vessels make the prediction of the RFA difficult. A monitoring tool which can be personalized for a given patient during the intervention would be helpful to achieve a complete tumor ablation. However, the clinicians do not have access to such a tool, which results in incomplete treatment and a large number of recurrences. Computational models can simulate the phenomena and mechanisms governing this therapy. The temperature evolution as well as the resulting ablation can be modeled. When combined with intraoperative measurements, computational modeling becomes an accurate and powerful tool to gain quantitative understanding and to enable improvements in the ongoing clinical settings. This paper shows how computational models of RFA can be evaluated using intra-operative measurements. First, simulations are used to demonstrate the feasibility of the method, which is then evaluated on two ex vivo datasets. RFA is simulated on a simplified geometry to generate realistic longitudinal temperature maps and the resulting necrosis. Computed temperatures are compared with the temperature evolution recorded using thermometers, and with temperatures monitored by ultrasound (US) in a 2D plane containing the ablation tip. Two ablations are performed on two cadaveric bovine livers, and we achieve an error of 2.2 °C on average between the computed and the thermistor temperatures, and of 1.4 °C and 2.7 °C on average between the temperature computed and that monitored by US during the ablation at two different time points (t = 240 s and t = 900 s).
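The temperature evolution described in this abstract is typically governed by the Pennes bioheat equation. The following is a minimal 1-D explicit finite-difference sketch of that equation, not the paper's model; all parameter values (tissue properties, perfusion rate, source strength) are illustrative placeholders, not patient-specific data.

```python
import numpy as np

def pennes_1d(n=101, dx=1e-3, dt=0.05, steps=2400,
              rho=1060.0, c=3600.0, k=0.512,
              w_b=6.4e-3, rho_b=1060.0, c_b=3600.0,
              T_a=37.0, q_src=None):
    """Explicit finite-difference solve of the 1-D Pennes bioheat equation:
    rho*c*dT/dt = k*d2T/dx2 - w_b*rho_b*c_b*(T - T_a) + q.
    All parameter values are illustrative, not patient-specific."""
    T = np.full(n, T_a)
    q = np.zeros(n) if q_src is None else q_src
    alpha = k / (rho * c)
    assert alpha * dt / dx**2 < 0.5  # explicit-scheme stability limit
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2.0*T[1:-1] + T[:-2]) / dx**2
        perf = w_b * rho_b * c_b * (T - T_a)  # perfusion heat sink
        T = T + dt * (k * lap - perf + q) / (rho * c)
        T[0] = T[-1] = T_a  # Dirichlet: body temperature far from the tip
    return T

# Point heat source mimicking RF power deposition at the domain centre
q = np.zeros(101)
q[50] = 5e6  # W/m^3, illustrative value
T = pennes_1d(q_src=q)  # temperature profile after steps*dt = 120 s
```

Comparing such a computed profile against thermistor or US-derived temperatures at fixed time points mirrors the evaluation strategy the abstract describes.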
Inclusion Detection in Aluminum Alloys Via Laser-Induced Breakdown Spectroscopy
NASA Astrophysics Data System (ADS)
Hudson, Shaymus W.; Craparo, Joseph; De Saro, Robert; Apelian, Diran
2018-04-01
Laser-induced breakdown spectroscopy (LIBS) has shown promise as a technique to quickly determine molten metal chemistry in real time. Because of its characteristics, LIBS could also be used as a technique to sense for unwanted inclusions and impurities. Simulated Al2O3 inclusions were added to molten aluminum via a metal-matrix composite. LIBS was performed in situ to determine whether particles could be detected. Outlier analysis on oxygen signal was performed on LIBS data and compared to oxide volume fraction measured through metallography. It was determined that LIBS could differentiate between melts with different amounts of inclusions by monitoring the fluctuations in signal for elements of interest. LIBS shows promise as an enabling tool for monitoring metal cleanliness.
Analysis For Monitoring the Earth Science Afternoon Constellation
NASA Technical Reports Server (NTRS)
Demarest, Peter; Richon, Karen V.; Wright, Frank
2005-01-01
The Earth Science Afternoon Constellation consists of Aqua, Aura, PARASOL, CALIPSO, Cloudsat, and the Orbiting Carbon Observatory (OCO). The coordination of flight dynamics activities between these missions is critical to the safety and success of the Afternoon Constellation. This coordination is based on two main concepts, the control box and the zone-of-exclusion. This paper describes how these two concepts are implemented in the Constellation Coordination System (CCS). The CCS is a collection of tools that enables the collection and distribution of flight dynamics products among the missions, allows cross-mission analyses to be performed through a web-based interface, performs automated analyses to monitor the overall constellation, and notifies the missions of changes in the status of the other missions.
An efficient framework for Java data processing systems in HPC environments
NASA Astrophysics Data System (ADS)
Fries, Aidan; Castañeda, Javier; Isasi, Yago; Taboada, Guillermo L.; Portell de Mora, Jordi; Sirvent, Raül
2011-11-01
Java is a commonly used programming language, although its use in High Performance Computing (HPC) remains relatively low. One of the reasons is a lack of libraries offering specific HPC functions to Java applications. In this paper we present a Java-based framework, called DpcbTools, designed to provide a set of functions that fill this gap. It includes a set of efficient data communication functions based on message-passing, thus providing, when a low latency network such as Myrinet is available, higher throughputs and lower latencies than standard solutions used by Java. DpcbTools also includes routines for the launching, monitoring and management of Java applications on several computing nodes by making use of JMX to communicate with remote Java VMs. The Gaia Data Processing and Analysis Consortium (DPAC) is a real case where scientific data from the ESA Gaia astrometric satellite will be entirely processed using Java. In this paper we describe the main elements of DPAC and its usage of the DpcbTools framework. We also assess the usefulness and performance of DpcbTools through its performance evaluation and the analysis of its impact on some DPAC systems deployed in the MareNostrum supercomputer (Barcelona Supercomputing Center).
Monitoring Training Load and Fatigue in Rugby Sevens Players
Elloumi, Mohamed; Makni, Emna; Moalla, Wassim; Bouaziz, Taieb; Tabka, Zouhair; Lac, Gérard; Chamari, Karim
2012-01-01
Purpose Trainers and physical fitness coaches need a useful tool to assess training loads in order to avoid overtraining; perceived-exertion scales or questionnaires are typically used for this purpose. The purpose of this study was therefore to assess whether a short 8-item questionnaire of fatigue could be a useful tool for monitoring changes in perceived training load and strain among elite rugby Sevens (7s) players during preparation for a major competition. Methods Sixteen elite rugby 7s players completed an 8-week training program composed of 6-week intense training (IT) and 2-week reduced training (RT). They were tested before (T0), after the IT (T1) and after the RT (T2). The perceived training load and strain were quantified by the session-RPE (rating of perceived exertion) method, and the 8-item questionnaire of fatigue was administered concomitantly. Results Training load (TL), training strain (TS) and the total score of fatigue (TSF, from the 8-item questionnaire) increased during IT and decreased during RT. Simultaneously, physical performances decreased during IT and improved after RT. The changes in TL, TS and TSF correlated significantly over the training period (r=0.63-0.83). Conclusions These findings suggest that the short questionnaire of fatigue could be a practical and sensitive tool for monitoring changes in training load and strain in team-sport athletes. Accordingly, the simultaneous use of the short questionnaire of fatigue along with the session-RPE method for perceived changes in training load and strain during training could provide additional information on the athletes' status, allowing coaches to prevent eventual states of overreaching or overtraining. PMID:23012637
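The session-RPE method referenced above is commonly operationalized via Foster's load, monotony and strain indices. The sketch below illustrates those standard formulas; the example RPE and duration values are invented for illustration and are not the study's data.

```python
from statistics import mean, stdev

def session_load(rpe, minutes):
    """Session training load = session RPE (CR-10 scale) x duration in minutes."""
    return rpe * minutes

def weekly_monotony_strain(daily_loads):
    """Foster's weekly indices over daily loads (a rest day counts as 0):
    monotony = mean daily load / SD of daily load;
    strain   = weekly load * monotony."""
    monotony = mean(daily_loads) / stdev(daily_loads)
    weekly_load = sum(daily_loads)
    return weekly_load, monotony, weekly_load * monotony

# One illustrative training week: (RPE, minutes) per day, two rest days
loads = [session_load(r, t) for r, t in
         [(7, 90), (5, 60), (8, 100), (0, 0), (6, 75), (4, 45), (0, 0)]]
week_tl, mono, strain = weekly_monotony_strain(loads)
```

Tracking these indices week over week, alongside a fatigue questionnaire score, is the kind of concurrent monitoring the abstract advocates.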
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seimenis, Ioannis; Tsekos, Nikolaos V.; Keroglou, Christoforos
2012-04-15
Purpose: The aim of this work was to develop and test a general methodology for the planning and performance of robot-assisted, MR-guided interventions. This methodology also includes the employment of software tools with appropriately tailored routines to effectively exploit the capabilities of MRI and address the relevant spatial limitations. Methods: The described methodology consists of: (1) patient-customized feasibility study that focuses on the geometric limitations imposed by the gantry, the robotic hardware, and interventional tools, as well as the patient; (2) stereotactic preoperative planning for initial positioning of the manipulator and alignment of its end-effector with a selected target; and (3) real-time, intraoperative tool tracking and monitoring of the actual intervention execution. Testing was performed inside a standard 1.5T MRI scanner in which the MR-compatible manipulator is deployed to provide the required access. Results: A volunteer imaging study demonstrates the application of the feasibility stage. A phantom study on needle targeting is also presented, demonstrating the applicability and effectiveness of the proposed preoperative and intraoperative stages of the methodology. For this purpose, a manually actuated, MR-compatible robotic manipulation system was used to accurately acquire a prescribed target through alternative approaching paths. Conclusions: The methodology presented and experimentally examined allows the effective performance of MR-guided interventions. It is suitable for, but not restricted to, needle-targeting applications assisted by a robotic manipulation system, which can be deployed inside a cylindrical scanner to provide the required access to the patient facilitating real-time guidance and monitoring.
Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Rajeeva; Kumar, Aditya; Dai, Dan
2012-12-31
This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location and number of sensors used in a network, for online condition monitoring. In particular, the focus in this work is to develop software tools for optimal sensor placement (OSP) and use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring for a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements in mathematical terms for the OSP algorithm, (ii) analyzing the trade-offs of alternate OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithm to design the optimal sensor network required for the condition monitoring of an IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g. refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints. The optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on an LMI formulation and the other on a standard INLP formulation.
Various algorithms to solve these two formulations were developed and validated. For a given OSP problem the computational efficiency largely depends on the “size” of the problem. Initially a simplified 1-D gasifier model assuming axial and azimuthal symmetry was used to test the various OSP algorithms. Finally, these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as the solution to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in modular form and packaged as a software tool for OSP design, in which a designer can explore the various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in-house in Matlab/Simulink©. The tool also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink© which can be utilized for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and the RSC fouling profile are summarized in this final report.
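The report poses OSP as an integer program over sensor subsets, minimizing network cost subject to an estimation-precision constraint derived from a Kalman-style posterior covariance. A toy brute-force sketch of that idea follows; the measurement maps, noise variances, costs and the precision threshold are all invented for illustration (real gasifier/RSC problems would use the program's models and an LMI or INLP solver rather than enumeration).

```python
import numpy as np
from itertools import combinations

# Toy steady-state OSP: pick the cheapest subset of candidate sensors whose
# fused posterior estimate of a 2-D static state meets a precision constraint.
H = np.array([[1.0,  0.0],   # each row: one candidate sensor's measurement map
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
r = np.array([0.5, 0.5, 0.2, 0.2])    # sensor noise variances (illustrative)
cost = np.array([1.0, 1.0, 3.0, 3.0])  # sensor costs (illustrative)
P0 = np.eye(2) * 10.0                  # prior state covariance

def posterior_trace(idx):
    """Trace of the posterior covariance after fusing sensors in idx
    (Bayesian linear estimation / information-form Kalman update)."""
    Hs = H[list(idx)]
    Rinv = np.diag(1.0 / r[list(idx)])
    P = np.linalg.inv(np.linalg.inv(P0) + Hs.T @ Rinv @ Hs)
    return np.trace(P)

best = None
for k in range(1, len(H) + 1):
    for idx in combinations(range(len(H)), k):
        if posterior_trace(idx) <= 0.5:          # precision requirement
            c = cost[list(idx)].sum()
            if best is None or c < best[1]:
                best = (idx, c)                  # cheapest feasible subset
```

Here the two differential sensors together are feasible and cheapest; reliability against sensor failure, which the report also constrains, could be added by requiring feasibility under any single-sensor loss.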
Real-time monitoring of CO2 storage sites: Application to Illinois Basin-Decatur Project
Picard, G.; Berard, T.; Chabora, E.; Marsteller, S.; Greenberg, S.; Finley, R.J.; Rinck, U.; Greenaway, R.; Champagnon, C.; Davard, J.
2011-01-01
Optimization of carbon dioxide (CO2) storage operations for efficiency and safety requires use of monitoring techniques and implementation of control protocols. The monitoring techniques consist of permanent sensors and tools deployed for measurement campaigns. Large amounts of data are thus generated. These data must be managed and integrated for interpretation at different time scales. A fast interpretation loop involves combining continuous measurements from permanent sensors as they are collected to enable a rapid response to detected events; a slower loop requires combining large datasets gathered over longer operational periods from all techniques. The purpose of this paper is twofold. First, it presents an analysis of the monitoring objectives to be performed in the slow and fast interpretation loops. Second, it describes the implementation of the fast interpretation loop with a real-time monitoring system at the Illinois Basin-Decatur Project (IBDP) in Illinois, USA. © 2011 Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Equeter, Lucas; Ducobu, François; Rivière-Lorphèvre, Edouard; Abouridouane, Mustapha; Klocke, Fritz; Dehombreux, Pierre
2018-05-01
Industrial concerns arise regarding the significant cost of cutting tools in the machining process. In particular, an improper replacement policy can lead either to scraps or to early tool replacements, which wastes tools that are still serviceable. ISO 3685 provides the flank wear end-of-life criterion. Flank wear is also the nominal type of wear for the longest tool lifetimes in optimal cutting conditions. Its consequences include poor surface roughness and dimensional discrepancies. In order to aid the replacement decision process, several tool condition monitoring techniques have been suggested. Force signals were shown in the literature to be strongly linked with tool flank wear. It can therefore be assumed that force signals are highly relevant for monitoring the condition of cutting tools and providing decision-aid information in the framework of their maintenance and replacement. The objective of this work is to correlate tool flank wear with numerically computed force signals. The present work uses a Finite Element Model with a Coupled Eulerian-Lagrangian approach. The geometry of the tool is changed for different runs of the model, in order to obtain results that are specific to a certain level of wear. The model is assessed by comparison with experimental data gathered earlier on fresh tools. Using the model at constant cutting parameters, force signals under different tool wear states are computed for each studied tool geometry. These signals are qualitatively compared with relevant data from the literature. At this point, no quantitative comparison could be performed on worn tools because the reviewed literature failed to provide similar studies in this material, either numerical or experimental. Therefore, further development of this work should include experimental campaigns aimed at collecting cutting force signals and assessing the numerical results that were achieved through this work.
EPMOSt: An Energy-Efficient Passive Monitoring System for Wireless Sensor Networks
Garcia, Fernando P.; Andrade, Rossana M. C.; Oliveira, Carina T.; de Souza, José Neuman
2014-01-01
Monitoring systems are important for debugging and analyzing Wireless Sensor Networks (WSN). In passive monitoring, a monitoring network needs to be deployed in addition to the network to be monitored, named the target network. The monitoring network captures and analyzes packets transmitted by the target network. An energy-efficient passive monitoring system is necessary when we need to monitor a WSN in a real scenario because the lifetime of the monitoring network is extended and, consequently, the target network benefits from the monitoring for a longer time. In this work, we have identified, analyzed and compared the main passive monitoring systems proposed for WSN. During our research, we did not identify any passive monitoring system for WSN that aims to reduce the energy consumption of the monitoring network. Therefore, we propose an Energy-efficient Passive MOnitoring SysTem for WSN named EPMOSt that provides monitoring information using a Simple Network Management Protocol (SNMP) agent. Thus, any management tool that supports the SNMP protocol can be integrated with this monitoring system. Experiments with real sensors were performed in several scenarios. The results obtained show the energy efficiency of the proposed monitoring system and the viability of using it to monitor WSN in real scenarios. PMID:24949639
Monitoring Error Rates In Illumina Sequencing.
Manley, Leigh J; Ma, Duanduan; Levine, Stuart S
2016-12-01
Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted.
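The PPR metric reports, per sequencing cycle, the fraction of reads that match the reference with zero mismatches through that cycle. The toy sketch below illustrates the counting logic only; it is not the published PPR program, and real pipelines would first align reads with a proper aligner rather than take alignment offsets as given.

```python
def percent_perfect_reads(reads, ref_starts, reference):
    """For each cycle i (1-based), the percentage of reads whose first i
    bases match the reference exactly at the read's alignment position.
    A toy stand-in for the PPR curve; mismatches accumulate, so the curve
    is monotonically non-increasing across cycles."""
    n_cycles = len(reads[0])
    ppr = []
    for i in range(1, n_cycles + 1):
        perfect = sum(
            read[:i] == reference[s:s + i]
            for read, s in zip(reads, ref_starts)
        )
        ppr.append(100.0 * perfect / len(reads))
    return ppr

reference = "ACGTACGTAC"
reads = ["ACGT", "ACGA", "GTAC"]   # second read has an error at cycle 4
starts = [0, 0, 2]                 # third read aligns at reference offset 2
curve = percent_perfect_reads(reads, starts, reference)
```

A sudden drop in such a curve at a specific cycle is exactly the kind of instrument-health signal the abstract says Q scores can obscure.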
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitanidis, Peter
As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.
Pros and cons of body mass index as a nutritional and risk assessment tool in dialysis patients.
Carrero, Juan Jesús; Avesani, Carla Maria
2015-01-01
Obesity is a problem of serious concern among chronic kidney disease (CKD) patients; it is a risk factor for progression to end-stage renal disease, and its incidence and prevalence in dialysis patients exceed those of the general population. Obesity, typically assessed with the simple metric of body mass index (BMI), is considered a mainstay for nutritional assessment in guidelines on nutrition in CKD. While regular BMI assessment in connection with the dialysis session is a simple and easy-to-use monitoring tool, such ease of access can lead to overuse, as the value of this metric to health care professionals is often overestimated. This review examines BMI as a clinical monitoring tool in CKD practice and offers a critical appraisal as to what a high or a low BMI may signify in this patient population. Topics discussed include the utility of BMI as a reflection of body size, body composition and body fat distribution, diagnostic versus prognostic performance, and consideration of temporal trends over single assessments. © 2014 Wiley Periodicals, Inc.
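For concreteness, the metric under discussion is simply weight over height squared, conventionally bucketed with WHO adult cut-offs. A minimal sketch (example values invented):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def who_category(b):
    """Standard WHO adult cut-offs. As the review above cautions, in dialysis
    patients BMI alone cannot distinguish fat, muscle mass and fluid overload."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

b = bmi(82.0, 1.75)        # ~26.8, "overweight" by the WHO cut-offs
```

The simplicity of this computation is precisely why the review argues its trends, rather than single values, deserve the clinical attention.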
Advanced data management for optimising the operation of a full-scale WWTP.
Beltrán, Sergio; Maiza, Mikel; de la Sota, Alejandro; Villanueva, José María; Ayesa, Eduardo
2012-01-01
The lack of appropriate data management tools is presently a limiting factor for a broader implementation and a more efficient use of sensors and analysers, monitoring systems and process controllers in wastewater treatment plants (WWTPs). This paper presents a technical solution for advanced data management of a full-scale WWTP. The solution is based on an efficient and intelligent use of the plant data by a standard centralisation of the heterogeneous data acquired from different sources, effective data processing to extract adequate information, and a straightforward connection to other emerging tools focused on the operational optimisation of the plant such as advanced monitoring and control or dynamic simulators. A pilot study of the advanced data manager tool was designed and implemented in the Galindo-Bilbao WWTP. The results of the pilot study showed its potential for agile and intelligent plant data management by generating new enriched information combining data from different plant sources, facilitating the connection of operational support systems, and developing automatic plots and trends of simulated results and actual data for plant performance and diagnosis.
Fiber optic video monitoring system for remote CT/MR scanners clinically accepted
NASA Astrophysics Data System (ADS)
Tecotzky, Raymond H.; Bazzill, Todd M.; Eldredge, Sandra L.; Tagawa, James; Sayre, James W.
1992-07-01
With the proliferation of CT scanners, radiologists must travel to distant scanners to review images before their patients can be released. We designed a fiber-optic broadband video system to transmit images from seven scanner consoles to fourteen remote monitoring stations in real time. This system has been used clinically by radiologists for over one year. We designed and conducted a user survey to categorize the levels of system use by section (Chest, GI, GU, Bone, Neuro, Peds, etc.), to measure operational utilization and acceptance of the system in the clinical environment, to clarify the system's importance as a clinical tool for saving radiologists' travel time to distant CT scanners, and to assess the system's performance and limitations as a diagnostic tool. The study was administered directly to radiologists using a printed survey form. The survey's compiled data show a high percentage of system usage by a wide spectrum of radiologists. Clearly, this system has been accepted into the clinical environment as a highly valued diagnostic tool in terms of time savings and functional flexibility.
Process auditing and performance improvement in a mixed wastewater-aqueous waste treatment plant.
Collivignarelli, Maria Cristina; Bertanza, Giorgio; Abbà, Alessandro; Damiani, Silvestro
2018-02-01
The wastewater treatment process is based on complex chemical, physical and biological mechanisms that are closely interconnected. The efficiency of the system (which depends on compliance with national regulations on wastewater quality) can be achieved through the use of tools such as monitoring, that is, the detection of parameters that allows continuous interpretation of the current situation, and experimental tests, which allow the measurement of real performance (of a sector, a single treatment or a piece of equipment) and comparison with subsequent ones. Experimental tests have a particular relevance in the case of municipal wastewater treatment plants fed with a strong industrial component, and especially in the case of plants authorized to treat aqueous waste. In this paper a case study is presented where the application of management tools such as careful monitoring and experimental tests led to the technical and economic optimization of the plant: the main results obtained were the reduction of sludge production (from 4,000 t/year w.w. (wet weight) to about 2,200 t/year w.w.) and operating costs (e.g. from 600,000 €/year down to about 350,000 €/year for reagents), the increase of resource recovery and the improvement of the overall process performance.
Kino-Oka, Masahiro; Ogawa, Natsuki; Umegaki, Ryota; Taya, Masahito
2005-01-01
A novel bioreactor system was designed to perform a series of batchwise cultures of anchorage-dependent cells by means of automated operations of medium change and passage for cell transfer. The experimental data on contamination frequency ensured the biological cleanliness in the bioreactor system, which facilitated the operations in a closed environment, as compared with that in a flask culture system with manual handling. In addition, tools for growth prediction (based on growth kinetics) and real-time growth monitoring by measurement of medium components (based on small-volume analyzing machinery) were installed into the bioreactor system to schedule the operations of medium change and passage and to confirm that culture proceeds as scheduled, respectively. The successive culture of anchorage-dependent cells was conducted with the bioreactor running in an automated way. The automated bioreactor gave a successful culture performance in good accordance with the preset scheduling based on the information from the latest subculture, realizing 79-fold cell expansion in 169 h. In addition, the correlation factor between experimental data and scheduled values through the bioreactor performance was 0.998. It was concluded that the proposed bioreactor with the integration of the prediction and monitoring tools could offer a feasible system for the manufacturing process of cultured tissue products.
1999-10-01
[Report documentation page residue] Performing organization: Naval Surface Warfare Center CD, Code 2230 (Design Integration Tools), Bldg 192, Room 128, 9500 MacArthur Blvd, Bethesda, MD 20817-5700. Tasks noted: implementation of Task 2.4; Task 7.0, conduct workshops; Task 8.0, final report.
ERIC Educational Resources Information Center
Lee, Victor R.; DuMont, Maneksha
2010-01-01
There is a great potential opportunity to use portable physical activity monitoring devices as data collection tools for educational purposes. Using one such device, we designed and implemented a weeklong workshop with high school students to test the utility of such technology. During that intervention, students performed data investigations of…
Commands to Monitor and Control Jobs on Peregrine | High-Performance
These commands can also be used with flags to return more or less information, for example showq -u
[Recommendations for the evaluation and follow-up of the continuous quality improvement].
Maurellet-Evrard, S; Daunizeau, A
2013-06-01
Continual improvement of quality in a medical laboratory is based on the implementation of tools to systematically evaluate the quality management system and its ability to meet the defined objectives. Monitoring through audits and management reviews, addressing complaints and nonconformities, and performing client satisfaction surveys are key to continual improvement.
Mathematical modeling of the impedance of single and multi-tube AMTEC units
NASA Technical Reports Server (NTRS)
Shields, V. B.; Williams, R. M.; Ryan, M. A.; Cortez, R.; Homer, M. L.; Kisor, A. K.; Manatt, K.
2001-01-01
AMTEC power systems are designed for use on extended space missions. During the lifetime of such missions the power available for the spacecraft will depend on the degradation of the system performance. Development of a tool that allows monitoring of the system degradation will provide an aid in determining the condition of the power source.
Data Dashboards for School Directors: Using Data for Accountability and Student Achievement
ERIC Educational Resources Information Center
Washington State School Directors' Association (NJ1), 2008
2008-01-01
This guide is designed to inform school directors about the value of a data dashboard and to provide information on how districts can create a data dashboard for school directors. A data dashboard is a tool for viewing and analyzing student achievement and performance data. Key data for monitoring student achievement and directing policy level…
PACS administrators' and radiologists' perspective on the importance of features for PACS selection.
Joshi, Vivek; Narra, Vamsi R; Joshi, Kailash; Lee, Kyootai; Melson, David
2014-08-01
Picture archiving and communication systems (PACS) play a critical role in radiology. This paper presents the criteria important to PACS administrators for selecting a PACS. A set of criteria is identified and organized into an integrative hierarchical framework. Survey responses from 48 administrators are used to identify the relative weights of these criteria through an analytical hierarchy process. The five main dimensions for PACS selection, in order of importance, are system continuity and functionality, system performance and architecture, user interface for workflow management, user interface for image manipulation, and display quality. Among the subdimensions, the highest weights were assessed for security, backup, and continuity; tools for continuous performance monitoring; support for multispecialty images; and voice recognition/transcription. PACS administrators' preferences were generally in line with previously reported results for radiologists. Both groups assigned the highest priority to ensuring business continuity and preventing loss of data through features such as security, backup, downtime prevention, and tools for continuous PACS performance monitoring. PACS administrators' next high priorities were support for multispecialty images, image retrieval speeds from short-term and long-term storage, real-time monitoring, and architectural issues of compatibility and integration with other products. Thus, next to ensuring business continuity, administrators' focus was on issues that impact their ability to deliver services and support. On the other hand, radiologists gave high priorities to voice recognition, transcription, and reporting; structured reporting; and convenience and responsiveness in manipulation of images. Thus, radiologists' focus appears to be on issues that may impact their productivity, effort, and accuracy.
Impedance microflow cytometry for viability studies of microorganisms
NASA Astrophysics Data System (ADS)
Di Berardino, Marco; Hebeisen, Monika; Hessler, Thomas; Ziswiler, Adrian; Largiadèr, Stephanie; Schade, Grit
2011-02-01
Impedance-based Coulter counters and their derivatives are widely used cell-analysis tools in many laboratories and normally use DC or low-frequency AC to perform these electrical analyses. The emergence of micro-fabrication technologies in the last decade, however, provides a new means of measuring the electrical properties of cells. Microfluidic approaches combined with impedance spectroscopy measurements in the radio-frequency (RF) range increase sensitivity and information content and thus push single-cell analysis beyond simple cell counting and sizing towards multiparametric cell characterization. Promising results have already been shown in the fields of cell differentiation and blood analysis. Here we emphasize the potential of this technology by presenting new data obtained from viability studies on microorganisms. Impedance measurements of several yeast and bacteria strains performed at frequencies around 10 MHz enable easy discrimination between dead and viable cells. Moreover, cytotoxic effects of antibiotics and other reagents, as well as cell starvation, can also be monitored easily. Control analyses performed with conventional flow cytometers using various fluorescent dyes (propidium iodide, oxonol) indicate a good correlation and further highlight the capability of this device. The label-free approach makes the use of usually expensive fluorochromes obsolete on the one hand, and practically eliminates laborious sample preparation procedures on the other. Until now, online cell monitoring has been limited to the determination of viable biomass, which provides rather poor information about a cell culture. Impedance microflow cytometry, among other aspects, offers a simple solution to these limitations and might become an important tool for bioprocess monitoring applications in the biotech industry.
NASA Astrophysics Data System (ADS)
Lanorte, Antonio; Desantis, Fortunato; Aromando, Angelo; Lasaponara, Rosa
2013-04-01
This paper presents the results we obtained in the context of the FIRE-SAT project during the 2012 operational application of the satellite-based tools for fire monitoring. The FIRE-SAT project has been funded by the Civil Protection of the Basilicata Region in order to set up a low-cost methodology for fire danger monitoring and fire effect estimation based on satellite Earth Observation techniques. To this aim, NASA Moderate Resolution Imaging Spectroradiometer (MODIS), ASTER and Landsat TM data were used. Novel data processing techniques have been developed by researchers of the ARGON Laboratory of the CNR-IMAA for the operational monitoring of fire. In this paper we focus only on the danger estimation model, which was used fruitfully from 2008 to 2012 as a reliable operational tool to support and optimize fire-fighting strategies, from the alert to the management of resources, including fire attacks. The daily updating of fire danger is carried out using satellite MODIS images, selected for their spectral capability and their free-of-charge availability from the NASA web site. This makes these data sets very suitable for effective, systematic (daily) and sustainable low-cost monitoring of large areas. The pre-operational use of the integrated model showed that the system properly monitors spatial and temporal variations of fire susceptibility and provides useful information on both fire severity and post-fire regeneration capability.
Griffiths, Alex; Beaussier, Anne-Laure; Demeritt, David; Rothstein, Henry
2017-02-01
The Care Quality Commission (CQC) is responsible for ensuring the quality of the health and social care delivered by more than 30 000 registered providers in England. With only limited resources for conducting on-site inspections, the CQC has used statistical surveillance tools to help it identify which providers it should prioritise for inspection. In the face of planned funding cuts, the CQC plans to put more reliance on statistical surveillance tools to assess risks to quality and prioritise inspections accordingly. The objective was to evaluate the ability of the CQC's latest surveillance tool, Intelligent Monitoring (IM), to predict the quality of care provided by National Health Service (NHS) hospital trusts so that those at greatest risk of providing poor-quality care can be identified and targeted for inspection. The predictive ability of the IM tool is evaluated through regression analyses and χ2 testing of the relationship between the quantitative risk score generated by the IM tool and the subsequent quality rating awarded following detailed on-site inspection by large expert teams of inspectors. First, the continuous risk scores generated by the CQC's IM statistical surveillance tool cannot predict inspection-based quality ratings of NHS hospital trusts (OR 0.38 (0.14 to 1.05) for Outstanding/Good, OR 0.94 (0.80 to 1.10) for Good/Requires improvement, and OR 0.90 (0.76 to 1.07) for Requires improvement/Inadequate). Second, the risk scores cannot be used more simply to distinguish the trusts performing poorly (those subsequently rated either 'Requires improvement' or 'Inadequate') from the trusts performing well (those subsequently rated either 'Good' or 'Outstanding') (OR 1.07 (0.91 to 1.26)). Classifying CQC's risk bandings 1-3 as high risk and 4-6 as low risk, 11 of the high risk trusts were performing well and 43 of the low risk trusts were performing poorly, resulting in an overall accuracy rate of 47.6%.
Third, the risk scores cannot be used even more simply to distinguish the worst performing trusts (those subsequently rated 'Inadequate') from the remaining, better performing trusts (OR 1.11 (0.94 to 1.32)). Classifying CQC's risk banding 1 as high risk and 2-6 as low risk, the highest overall accuracy rate of 72.8% was achieved, but still only 6 of the 13 Inadequate trusts were correctly classified as being high risk. Since the IM statistical surveillance tool cannot predict the outcome of NHS hospital trust inspections, it cannot be used for prioritisation. A new approach to inspection planning is therefore required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
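The banding-versus-rating evaluation above reduces to a 2x2 confusion matrix from which both the odds ratio and the accuracy rate follow directly. A minimal sketch with hypothetical counts (not the CQC's actual figures; function names are ours):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table of risk banding vs. inspection outcome:
       a = high risk & rated poorly, b = high risk & rated well,
       c = low risk  & rated poorly, d = low risk  & rated well."""
    return (a * d) / (b * c)

def accuracy(a, b, c, d):
    """Fraction of trusts whose risk banding agreed with the inspection rating
       (high risk paired with a poor rating, low risk with a good one)."""
    return (a + d) / (a + b + c + d)

# Hypothetical counts for illustration only:
a, b, c, d = 6, 7, 30, 60
or_est = odds_ratio(a, b, c, d)  # > 1 means high-risk bandings enrich for poor ratings
acc = accuracy(a, b, c, d)
```

A tool is only useful for prioritisation if the odds ratio is convincingly above 1; an OR confidence interval spanning 1, as reported above, means the risk score carries no usable signal.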
STS-1 mission contamination evaluation approach
NASA Technical Reports Server (NTRS)
Jacobs, S.; Ehlers, H.; Miller, E. R.
1980-01-01
The Space Transportation System 1 (STS-1) mission will be the first opportunity to assess the induced environment of the orbiter payload bay region. Two tools were developed to aid in this assessment. The shuttle payload contamination evaluation computer program was developed to provide an analytical tool for prediction of the induced molecular contamination environment of the space shuttle orbiter during its on-orbit operations. An induced environment contamination monitor was constructed and tested to measure the space shuttle orbiter contamination environment inside the payload bay during ascent and descent, and inside and outside the payload bay during the on-orbit phase. Measurements are to be performed during the four orbital flight test series. Measurements planned for the first flight are described and predicted environmental data are discussed. The results indicate that the expected data are within the measurement range of the induced environment contamination monitor instruments evaluated; therefore, it is expected that useful contamination environmental data will be available after the first flight.
Network Monitor and Control of Disruption-Tolerant Networks
NASA Technical Reports Server (NTRS)
Torgerson, J. Leigh
2014-01-01
For nearly a decade, NASA and many researchers in the international community have been developing Internet-like protocols that allow for automated network operations in networks where the individual links between nodes are only sporadically connected. A family of Disruption-Tolerant Networking (DTN) protocols has been developed, and many are reaching CCSDS Blue Book status. A NASA version of DTN known as the Interplanetary Overlay Network (ION) has been flight-tested on the EPOXI spacecraft and ION is currently being tested on the International Space Station. Experience has shown that in order for a DTN service-provider to set up a large scale multi-node network, a number of network monitor and control technologies need to be fielded as well as the basic DTN protocols. The NASA DTN program is developing a standardized means of querying a DTN node to ascertain its operational status, known as the DTN Management Protocol (DTNMP), and the program has developed some prototypes of DTNMP software. While DTNMP is a necessary component, it is not sufficient to accomplish Network Monitor and Control of a DTN network. JPL is developing a suite of tools that provide for network visualization, performance monitoring and ION node control software. This suite of network monitor and control tools complements the GSFC and APL-developed DTN MP software, and the combined package can form the basis for flight operations using DTN.
WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Kadochnikov, I.; Saiz, P.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large: the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the heterogeneity of data transfer technologies. Currently there are two main alternatives for data transfers on the WLCG: the File Transfer Service and the XRootD protocol. Each LHC VO has its own monitoring system, which is limited in scope to that particular VO. There is a need for a global system which would provide a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool - the WLCG Transfers Dashboard - where all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges. Each technology comes with its own monitoring specificities, and some of the VOs use several of these technologies. This paper describes the implementation of the system, with particular focus on the design principles applied to ensure the necessary scalability and performance, and to easily integrate any new technology while providing additional functionality which might be specific to that technology.
Humfredo Marcano-Vega; Andrew Lister; Kevin Megown; Charles Scott
2016-01-01
There is a growing need within the insular Caribbean for technical assistance in planning forest-monitoring projects and data analysis. This paper gives an overview of software tools developed by the USDA Forest Service's National Inventory and Monitoring Applications Center and the Remote Sensing Applications Center. We discuss their applicability in the efficient...
A Job Monitoring and Accounting Tool for the LSF Batch System
NASA Astrophysics Data System (ADS)
Sarkar, Subir; Taneja, Sonia
2011-12-01
This paper presents a web-based job monitoring and group-and-user accounting tool for the LSF Batch System. The user-oriented job monitor displays a simple and compact quasi-real-time overview of the batch farm for both local and Grid jobs. For Grid jobs, the Distinguished Name (DN) of the Grid user is shown. The overview monitor provides the most up-to-date status of the batch farm at any time. The accounting tool works with the LSF accounting log files. The accounting information is shown for a few pre-defined time periods by default; however, one can also compute the same information for any arbitrary time window. The tool has already proved to be an extremely useful means of validating the more extensive accounting tools available in the Grid world. Several sites are already using the present tool, and more sites running the LSF batch system have shown interest. We discuss the various aspects that make the tool essential for site administrators and end-users alike and outline the current status of development as well as future plans.
Kim, Jung Hyup; Rothrock, Ling; Laberge, Jason
2014-05-01
This paper provides a case study of Signal Detection Theory (SDT) as applied to a continuous monitoring dual-task environment. Specifically, SDT was used to evaluate the independent contributions of sensitivity and bias to different qualitative gauges used in process control. To assess detection performance in monitoring the gauges, we developed a Time Window-based Human-In-The-Loop (TWHITL) simulation bed. Through this test bed, we were able to generate a display similar to those monitored by console operators in oil and gas refinery plants. By using SDT and TWHITL, we evaluated the sensitivity, operator bias, and response time of flow, level, pressure, and temperature gauge shapes developed by the Abnormal Situation Management® (ASM®) Consortium (www.asmconsortium.org). Our findings suggest that display density influences the effectiveness of participants in detecting abnormal shapes. Furthermore, results suggest that some shapes elicit better detection performance than others. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
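The sensitivity and bias measures referred to above are the standard equal-variance SDT quantities, computable from hit and false-alarm rates. A short sketch (the rates below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate: float, fa_rate: float):
    """Sensitivity d' and bias c under the equal-variance Gaussian SDT model:
       d' = z(H) - z(FA),  c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical rates for illustration:
d_prime, c = dprime_and_criterion(hit_rate=0.84, fa_rate=0.16)
# Symmetric hit/false-alarm rates give an unbiased observer (c = 0)
# with a d' of about 2.
```

Separating d' from c is what lets a gauge-shape comparison distinguish genuinely more detectable shapes from shapes that merely shift the operator's response criterion.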
Audio signal analysis for tool wear monitoring in sheet metal stamping
NASA Astrophysics Data System (ADS)
Ubhayaratne, Indivarie; Pereira, Michael P.; Xiang, Yong; Rolfe, Bernard F.
2017-02-01
Stamping tool wear can significantly degrade product quality, and hence online tool condition monitoring is a timely need in many manufacturing industries. Even though a large amount of research has been conducted employing different sensor signals, there is still an unmet demand for a low-cost, easy-to-set-up condition monitoring system. Audio signal analysis is a simple method that has the potential to meet this demand, but it has not previously been used for stamping process monitoring. Hence, this paper studies the existence and the significance of the correlation between emitted sound signals and the wear state of sheet metal stamping tools. The corrupting sources generated by the tooling of the stamping press and surrounding machinery have higher amplitudes than the sound emitted by the stamping operation itself. Therefore, a newly developed semi-blind signal extraction technique was employed as a pre-processing step to mitigate the contribution of these corrupting sources. The spectral analysis results of the raw and extracted signals demonstrate a significant qualitative relationship between wear progression and the emitted sound signature. This study lays the basis for employing low-cost audio signal analysis in the development of a real-time industrial tool condition monitoring system.
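The paper's semi-blind extraction technique is not reproduced here, but the spectral-analysis step it feeds can be sketched with a naive DFT (the signal content and bin numbers below are invented for illustration):

```python
import cmath, math

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum (O(N^2); adequate for short illustrative frames)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Invented example: a dominant 'press' tone at bin 2 plus a weaker
# 'wear signature' at bin 7 of a 64-sample frame.
n = 64
frame = [math.sin(2 * math.pi * 2 * t / n) + 0.3 * math.sin(2 * math.pi * 7 * t / n)
         for t in range(n)]
mags = dft_magnitudes(frame)
peak = max(range(1, n // 2), key=mags.__getitem__)  # strongest bin below Nyquist
```

Tracking how secondary spectral components like the weaker tone grow across stamping cycles is the kind of qualitative wear-to-signature relationship the study reports.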
NASA Technical Reports Server (NTRS)
Thomas, Stan J.
1993-01-01
KATE (Knowledge-based Autonomous Test Engineer) is a model-based software system developed in the Artificial Intelligence Laboratory at the Kennedy Space Center for monitoring, fault detection, and control of launch vehicles and ground support systems. In order to bring KATE to the level of performance, functionality, and integratability needed for firing room applications, efforts are underway to implement KATE in the C++ programming language using an X-windows interface. Two programs designed and added to the collection of tools that comprise the KATE toolbox are described. The first tool, called the schematic viewer, gives the KATE user the capability to view digitized schematic drawings in the KATE environment. The second tool, called the model editor, gives the KATE model builder a tool for creating and editing knowledge base files. Design and implementation issues concerning these two tools are discussed. This discussion will be useful to anyone maintaining or extending either the schematic viewer or the model editor.
Heumann, F.K.; Wilkinson, J.C.; Wooding, D.R.
1997-12-16
A remote appliance for supporting a tool for performing work at a work site on a substantially circular bore of a work piece and for providing video signals of the work site to a remote monitor comprises: a base plate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the base plate and positioned to roll against the bore of the work piece when the base plate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the base plate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the base plate such that the working end of the tool is positioned on the inner face side of the base plate; a camera for providing video signals of the work site to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the base plate, the camera holding means being adjustably attached to the outer face of the base plate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris. 5 figs.
Hofman, Jelle; Samson, Roeland
2014-09-01
Biomagnetic monitoring of tree-leaf-deposited particles has proven to be a good indicator of the ambient particulate concentration. The objective of this study is to apply this method to validate a local-scale air quality model (ENVI-met), using 96 tree crown sampling locations in a typical urban street canyon. To the best of our knowledge, this is the first application of biomagnetic monitoring to the validation of pollutant dispersion modeling. Quantitative ENVI-met validation showed significant correlations between modeled and measured results throughout the entire in-leaf period. ENVI-met performed much better in the first half of the street canyon, close to the ring road (r=0.58-0.79, RMSE=44-49%), than in the second part (r=0.58-0.64, RMSE=74-102%). The spatial model behavior was evaluated by testing the effects of height, azimuthal position, tree position and distance from the main pollution source on the obtained model results and magnetic measurements. Our results demonstrate that biomagnetic monitoring seems to be a valuable method to evaluate the performance of air quality models. Due to the high spatial and temporal resolution of this technique, biomagnetic monitoring can be applied anywhere in the city (where urban green is present) to evaluate model performance at different spatial scales. Copyright © 2014 Elsevier Ltd. All rights reserved.
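The agreement metrics used above (Pearson r and a percentage RMSE) can be computed directly. A small sketch in pure Python; the paper's exact RMSE normalization is not stated, so normalization by the mean measured value is an assumption:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def nrmse_percent(modeled, measured):
    """RMSE normalized by the mean measured value, expressed in percent."""
    n = len(measured)
    rmse = math.sqrt(sum((m, o) == (m, o) and (m - o) ** 2
                         for m, o in zip(modeled, measured)) / n)
    return 100.0 * rmse / (sum(measured) / n)
```

Computing both per canyon segment, as the study does, exposes where a dispersion model's spatial performance breaks down rather than averaging the error away.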
Improvement of Computer Software Quality through Software Automated Tools.
1986-08-30
NASA Astrophysics Data System (ADS)
Lorenzoni, Filippo; Casarin, Filippo; Caldon, Mauro; Islami, Kleidi; Modena, Claudio
2016-01-01
In recent decades, the need for effective seismic protection and vulnerability reduction of cultural heritage buildings and sites has led to a growing interest in structural health monitoring (SHM) as a knowledge-based assessment tool to quantify and reduce uncertainties regarding their structural performance. Monitoring can in some cases be successfully implemented as an alternative to interventions, or to control the medium- and long-term effectiveness of already applied strengthening solutions. The research group at the University of Padua, in collaboration with public administrations, has recently installed several SHM systems on heritage structures. The paper reports the application of monitoring strategies implemented to avoid (or at least minimize) the execution of strengthening interventions/repairs and to control the response until a clear worsening or damaging process is detected. Two emblematic case studies are presented and discussed: the Roman Amphitheatre (Arena) of Verona and the Conegliano Cathedral. Both are excellent examples of on-going monitoring activities, performed through static and dynamic approaches in combination with automated procedures to extract meaningful structural features from the collected data. In parallel with the application of innovative monitoring techniques, statistical models and data processing algorithms have been developed and applied in order to reduce uncertainties and exploit monitoring results for an effective assessment and protection of historical constructions. Processing software for SHM was implemented to perform continuous real-time treatment of static data and the identification of modal parameters based on the structural response to ambient vibrations. Statistical models were also developed to filter out environmental effects and thermal cycles from the extracted features.
Borgatti, Antonella; Winter, Amber L; Stuebner, Kathleen; Scott, Ruth; Ober, Christopher P; Anderson, Kari L; Feeney, Daniel A; Vallera, Daniel A; Koopmeiners, Joseph S; Modiano, Jaime F; Froelich, Jerry
2017-01-01
Positron Emission Tomography-Computed Tomography (PET-CT) is routinely used for staging and monitoring of human cancer patients and is becoming increasingly available in veterinary medicine. In this study, 18-fluorodeoxyglucose (18FDG)-PET-CT was used in dogs with naturally occurring splenic hemangiosarcoma (HSA) to assess its utility as a staging and monitoring modality as compared to standard radiography and ultrasonography. Nine dogs with stage-2 HSA underwent 18FDG-PET-CT following splenectomy and prior to commencement of chemotherapy. Routine staging (thoracic radiography and abdominal ultrasonography) was performed prior to 18FDG-PET-CT in all dogs. When abnormalities not identified on routine tests were noted on 18FDG-PET-CT, owners were given the option to repeat a PET-CT following treatment with eBAT. A PET-CT scan was repeated on Day 21 in three dogs. Abnormalities not observed on conventional staging tools, and most consistent with malignant disease based on location, appearance, and outcome, were detected in two dogs and included a right atrial mass and a hepatic nodule, respectively. These lesions were larger and had higher metabolic activity on the second scans. 18FDG-PET-CT has potential to provide important prognostic information and influence treatment recommendations for dogs with stage-2 HSA. Additional studies will be needed to precisely define the value of this imaging tool for staging and therapy monitoring in dogs with this and other cancers.
Remote real-time monitoring of subsurface landfill gas migration.
Fay, Cormac; Doherty, Aiden R; Beirne, Stephen; Collins, Fiachra; Foley, Colum; Healy, John; Kiernan, Breda M; Lee, Hyowon; Maher, Damien; Orpen, Dylan; Phelan, Thomas; Qiu, Zhengwei; Zhang, Kirk; Gurrin, Cathal; Corcoran, Brian; O'Connor, Noel E; Smeaton, Alan F; Diamond, Dermot
2011-01-01
The cost of monitoring greenhouse gas emissions from landfill sites is of major concern for regulatory authorities. The current monitoring procedure is recognised as labour intensive, requiring agency inspectors to physically travel to perimeter borehole wells in rough terrain and manually measure gas concentration levels with expensive hand-held instrumentation. In this article we present a cost-effective and efficient system for remotely monitoring landfill subsurface migration of methane and carbon dioxide concentration levels. Based purely on an autonomous sensing architecture, the proposed sensing platform was capable of performing complex analytical measurements in situ and successfully communicating the data remotely to a cloud database. A web tool was developed to present the sensed data to relevant stakeholders. We report our experiences in deploying such an approach in the field over a period of approximately 16 months.
Ironmonger, Dean; Edeghere, Obaghe; Gossain, Savita; Bains, Amardeep; Hawkey, Peter M
2013-10-01
Antimicrobial resistance (AMR) is recognized as one of the most significant threats to human health. Local and regional AMR surveillance enables the monitoring of temporal changes in susceptibility to antibiotics and can provide prescribing guidance to healthcare providers to improve patient management and help slow the spread of antibiotic resistance in the community. There is currently a paucity of routine community-level AMR surveillance information. The HPA in England sponsored the development of an AMR surveillance system (AmSurv) to collate local laboratory reports. In the West Midlands region of England, routine reporting of AMR data has been established via the AmSurv system from all diagnostic microbiology laboratories. The HPA Regional Epidemiology Unit developed a web-enabled database application (AmWeb) to provide microbiologists, pharmacists and other stakeholders with timely access to AMR data using user-configurable reporting tools. AmWeb was launched in the West Midlands in January 2012 and is used by microbiologists and pharmacists to monitor resistance profiles, perform local benchmarking and compile data for infection control reports. AmWeb is now being rolled out to all English regions. It is expected that AmWeb will become a valuable tool for monitoring the threat from newly emerging or currently circulating resistant organisms and helping antibiotic prescribers to select the best treatment options for their patients.
Faurholt-Jepsen, Maria; Munkholm, Klaus; Frost, Mads; Bardram, Jakob E; Kessing, Lars Vedel
2016-01-15
Various paper-based mood charting instruments are used in the monitoring of symptoms in bipolar disorder. During recent years an increasing number of electronic self-monitoring tools have been developed. The objectives of this systematic review were 1) to evaluate the validity of electronic self-monitoring tools as a method of evaluating mood compared to clinical rating scales for depression and mania and 2) to investigate the effect of electronic self-monitoring tools on clinically relevant outcomes in bipolar disorder. A systematic review of the scientific literature, reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, was conducted. MEDLINE, Embase, PsycINFO and The Cochrane Library were searched and supplemented by hand search of reference lists. Databases were searched for 1) studies on electronic self-monitoring tools in patients with bipolar disorder reporting on validity of electronically self-reported mood ratings compared to clinical rating scales for depression and mania and 2) randomized controlled trials (RCT) evaluating electronic mood self-monitoring tools in patients with bipolar disorder. A total of 13 published articles were included. Seven articles were RCTs and six were longitudinal studies. Electronic self-monitoring of mood was considered valid compared to clinical rating scales for depression in six out of six studies, and in two out of seven studies compared to clinical rating scales for mania. The included RCTs primarily investigated the effect of heterogeneous electronically delivered interventions; none of the RCTs investigated the sole effect of electronic mood self-monitoring tools. Methodological issues with risk of bias at different levels limited the evidence in the majority of studies. Electronic self-monitoring of mood in depression appears to be a valid measure of mood in contrast to self-monitoring of mood in mania.
There are yet few studies on the effect of electronic self-monitoring of mood in bipolar disorder. The evidence of electronic self-monitoring is limited by methodological issues and by a lack of RCTs. Although the idea of electronic self-monitoring of mood seems appealing, studies using rigorous methodology investigating the beneficial as well as possible harmful effects of electronic self-monitoring are needed.
Field Confirmation and Monitoring Tools for Aerobic Bioremediation of TBA and MTBE
NASA Astrophysics Data System (ADS)
North, K.; Rasa, E.; Mackay, D. M.; Scow, K. M.; Hristova, K. R.
2009-12-01
We have been investigating in situ biotreatment of an existing tert-butyl alcohol (TBA) plume at Vandenberg AFB by recirculation/oxygenation and evaluating monitoring tools for microbial community composition and activity inside and outside of the treatment zone. Results indicate that recirculation/oxygenation by two pairs of recirculation wells is effective at adding oxygen and decreasing methyl tert-butyl ether (MTBE) and TBA concentrations to detection limits along the flowpaths predicted. Compound-specific isotope analyses (CSIA) of groundwater and microbial community analyses (extraction and analysis of DNA) of groundwater and sediments are underway for sampling locations along flowpaths inside and outside of the treatment zone to seek confirmation of in situ biodegradation. We are also evaluating a novel approach to compare the performance of microbial “traps” in characterizing microbial communities: groundwater from the aerobic treatment zone is extracted, separated and directed to multiple chambers located in an air-conditioned ex situ experimental setup. The “traps” under evaluation are in separate chambers; influent and effluent are monitored. The traps being evaluated include Bio-Trap® housings containing Bio-Sep® beads baited with MTBE or TBA labeled with 13C and various unbaited materials. Insights from the various monitoring approaches will be discussed and compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Létourneau, Daniel, E-mail: daniel.letourneau@rmp.uh.on.ca; McNiven, Andrea; Keller, Harald
2014-12-15
Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations.
Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves, and the majority of the apparent leaf motion was attributed to beam spot displacements between irradiations. The MLC QC test was performed 193 and 162 times over the monitoring period for the studied units, and recalibration had to be repeated up to three times on one of these units. For both units, the rate of MLC interlocks was moderately associated with MLC servicing events. The strongest association with MLC performance was observed between the MLC servicing events and the total number of out-of-control leaves. The average elapsed time for which the number of out-of-specification or out-of-control leaves was within a given performance threshold was computed and used to assess the adequacy of MLC test frequency. Conclusions: An MLC performance monitoring system has been developed and implemented to acquire high-quality QC data at high frequency. This is enabled by the relatively short acquisition time for the images and automatic image analysis. The monitoring system was also used to record and track the rate of MLC-related interlocks and servicing events. MLC performance for two commercially available MLC models has been assessed, and the results support a monthly test frequency for the widely accepted ±1 mm specifications. Higher QC test frequency is, however, required to maintain tighter specifications and in-control behavior.
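The per-leaf individuals control charts described above can be illustrated with a short sketch. This is not the authors' implementation: the leaf-offset data are made up, and the limits use the standard individuals-chart rule (mean ± 2.66 × average moving range); only the ±1 mm specification comes from the abstract.

```python
# Sketch: individuals (X-mR) control chart for one MLC leaf's position offsets.
# `offsets` are hypothetical leaf-position deviations in mm, one per QC test.
def control_limits(offsets):
    mean = sum(offsets) / len(offsets)
    moving_ranges = [abs(b - a) for a, b in zip(offsets, offsets[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # Standard individuals-chart limits: mean +/- 2.66 * average moving range
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def flag_leaf(offsets, spec=1.0):
    """Return (out-of-control points, out-of-specification points)."""
    lcl, ucl = control_limits(offsets)
    out_of_control = [x for x in offsets if not lcl <= x <= ucl]
    out_of_spec = [x for x in offsets if abs(x) > spec]
    return out_of_control, out_of_spec

offsets = [0.05, -0.10, 0.08, 0.02, -0.04, 0.06, 1.20]  # last point drifts
ooc, oos = flag_leaf(offsets, spec=1.0)
print(ooc, oos)
```

Individuals charts suit this setting because each QC test yields a single position measurement per leaf, so there is no natural subgroup to average over.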
Is it worth changing pattern recognition methods for structural health monitoring?
NASA Astrophysics Data System (ADS)
Bull, L. A.; Worden, K.; Cross, E. J.; Dervilis, N.
2017-05-01
The key element of this work is to demonstrate alternative strategies for using pattern recognition algorithms whilst investigating structural health monitoring. This paper looks to determine if it makes any difference in choosing from a range of established classification techniques: from decision trees and support vector machines, to Gaussian processes. Classification algorithms are tested on adjustable synthetic data to establish performance metrics, then all techniques are applied to real SHM data. To aid the selection of training data, an informative chain of artificial intelligence tools is used to explore an active learning interaction between meaningful clusters of data.
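As a rough illustration of the comparison this abstract describes (decision trees, support vector machines and Gaussian processes benchmarked on adjustable synthetic data), here is a sketch using scikit-learn as an assumed stand-in toolchain; the dataset parameters are arbitrary, not the paper's.

```python
# Sketch: benchmarking established classifiers on adjustable synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier

# "Adjustable" synthetic data: difficulty is tuned via n_informative, noise, etc.
X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC()),
                  ("Gaussian process", GaussianProcessClassifier())]:
    score = clf.fit(X_tr, y_tr).score(X_te, y_te)  # held-out accuracy
    print(f"{name}: {score:.2f}")
```

In practice the same loop would then be re-run on the real SHM data, with the synthetic results serving as the baseline performance metrics.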
Automated flight test management system
NASA Technical Reports Server (NTRS)
Hewett, M. D.; Tartt, D. M.; Agarwal, A.
1991-01-01
The Phase 1 development of an automated flight test management system (ATMS) as a component of a rapid prototyping flight research facility for artificial intelligence (AI) based flight concepts is discussed. The ATMS provides a flight engineer with a set of tools that assist in flight test planning, monitoring, and simulation. The system is also capable of controlling an aircraft during flight test by performing closed loop guidance functions, range management, and maneuver-quality monitoring. The ATMS is being used as a prototypical system to develop a flight research facility for AI based flight systems concepts at NASA Ames Dryden.
Customizable tool for ecological data entry, assessment, monitoring, and interpretation
USDA-ARS?s Scientific Manuscript database
The Database for Inventory, Monitoring and Assessment (DIMA) is a highly customizable tool for data entry, assessment, monitoring, and interpretation. DIMA is a Microsoft Access database that can easily be used without Access knowledge and is available at no cost. Data can be entered for common, nat...
Mitty, Ethel; Flores, Sandi
2007-01-01
More than half the states permit assistance with or administration of medications by unlicensed assistive personnel or med techs. Authorization of this nursing activity (or task) is more likely because of state assisted living regulation than by support and approval of the state Board of Nursing. In many states, the definition of "assistance with" reads exactly like "administration of" thereby raising concern with regard to delegation, accountability, and liability for practice. It is, as well, a hazardous path for the assisted living nurse who must monitor and evaluate the performance of the individual performing this nursing task. This article, the second in a series on medication management, addresses delegation, standards of practice of medication administration, types of medication errors, the components of a performance evaluation tool, and a culture of safety. Maintaining professional standards of assisted living nursing practice courses throughout the suggested recommendations.
The design of an intelligent human-computer interface for the test, control and monitor system
NASA Technical Reports Server (NTRS)
Shoaff, William D.
1988-01-01
The graphical intelligence and assistance capabilities of a human-computer interface for the Test, Control, and Monitor System at Kennedy Space Center are explored. The report focuses on how a particular commercial off-the-shelf graphical software package, Data Views, can be used to produce tools that build widgets such as menus, text panels, graphs, icons, windows, and ultimately complete interfaces for monitoring data from an application; controlling an application by providing input data to it; and testing an application by both monitoring and controlling it. A complete set of tools for building interfaces is described in a manual for the TCMS toolkit. Simple tools create primitive widgets such as lines, rectangles and text strings. Intermediate level tools create pictographs from primitive widgets, and connect processes to either text strings or pictographs. Other tools create input objects; Data Views supports output objects directly, thus output objects are not considered. Finally, a set of utilities for executing, monitoring use, editing, and displaying the content of interfaces is included in the toolkit.
Le Neindre, Aymeric; Mongodi, Silvia; Philippart, François; Bouhemad, Bélaïd
2016-02-01
The use of diagnostic ultrasound by physiotherapists is not a new concept; it is frequently performed in musculoskeletal physiotherapy. Physiotherapists currently lack accurate, reliable, sensitive, and valid measurements for the assessment of the indications and effectiveness of chest physiotherapy. Thoracic ultrasound may be a promising tool for the physiotherapist and could be routinely performed at patients' bedsides to provide real-time and accurate information on the status of pleura, lungs, and diaphragm; this would allow for assessment of lung aeration from interstitial syndrome to lung consolidation with much better accuracy than chest x-rays or auscultation. Diaphragm excursion and contractility may also be assessed by ultrasound. This narrative review refers to lung and diaphragm ultrasound semiology and describes how physiotherapists could use this tool in their clinical decision-making processes in various cases of respiratory disorders. The use of thoracic ultrasound semiology alongside typical examinations may allow for the guiding, monitoring, and evaluating of chest physiotherapy treatments. Thoracic ultrasound is a potential new tool for physiotherapists. Copyright © 2015 Elsevier Inc. All rights reserved.
Logistics and the Fight -- Lessons from Napoleon
2011-04-07
LCDR Sean W. Toole, SC, USN; USMC Command and Staff College, Marine Corps University, 2076 South Street, Quantico, VA 22134-5068
Behavioral Health and Performance (BHP) Work-Rest Cycles
NASA Technical Reports Server (NTRS)
Leveton, Lauren B.; Whitmire, Alexandra
2011-01-01
BHP Program Element Goal: Identify, characterize, and prevent or reduce behavioral health and performance risks associated with space travel, exploration and return to terrestrial life. BHP Requirements: a) Characterize and assess risks (e.g., likelihood and consequences). b) Develop tools and technologies to prevent, monitor, and treat adverse outcomes. c) Inform standards. d) Develop technologies to: 1) reduce risks and human systems resource requirements (e.g., crew time, mass, volume, power) and 2) ensure effective human-system integration across exploration mission.
Dietary Adherence Monitoring Tool for Free-living, Controlled Feeding Studies
USDA-ARS?s Scientific Manuscript database
Objective: To devise a dietary adherence monitoring tool for use in controlled human feeding trials involving free-living study participants. Methods: A scoring tool was devised to measure and track dietary adherence for an 8-wk randomized trial evaluating the effects of two different dietary patter...
Geant4 Computing Performance Benchmarking and Monitoring
Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...
2015-12-23
Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
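The core comparison step (checking each benchmarking run against a reference release and flagging changes in CPU and memory usage) can be sketched as follows. The metric names, sample values and 5% threshold are illustrative assumptions, not Geant4's actual data or policy.

```python
# Sketch: flagging performance regressions relative to a reference release.
def regressions(current, reference, threshold=0.05):
    """Return metrics that grew by more than `threshold` vs. the reference."""
    flagged = {}
    for metric, value in current.items():
        rel = (value - reference[metric]) / reference[metric]
        if rel > threshold:  # slower or larger than reference by >5%
            flagged[metric] = round(rel, 3)
    return flagged

ref = {"cpu_s_per_event": 2.00, "rss_mb": 950.0}  # reference release
cur = {"cpu_s_per_event": 2.20, "rss_mb": 940.0}  # development release
print(regressions(cur, ref))
```

A flagged metric would then be correlated with the code modifications that entered the release since the reference, as the abstract describes.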
Let your fingers do the walking: The project's most invaluable tool
NASA Technical Reports Server (NTRS)
Zirk, Deborah A.
1993-01-01
The barrage of information pertaining to the software being developed for a project can be overwhelming. Current status information, as well as the statistics and history of software releases, should be 'at the fingertips' of project management and key technical personnel. This paper discusses the development, configuration, capabilities, and operation of a relational database, the System Engineering Database (SEDB), which was designed to assist management in monitoring the tasks performed by the Network Control Center (NCC) Project. This database has proven to be an invaluable project tool and is utilized daily to support all project personnel.
Tool wear modeling using abductive networks
NASA Astrophysics Data System (ADS)
Masory, Oren
1992-09-01
A tool wear model based on Abductive Networks, which consists of a network of 'polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus, real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.
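A 'polynomial' node of the kind used in abductive (GMDH-style) networks can be sketched in a few lines. The quadratic form of two inputs is the classic Ivakhnenko polynomial; the coefficients and input values below are purely illustrative, not the trained model from the paper.

```python
# Sketch: one GMDH-style "polynomial" node from an abductive network.
def poly_node(x1, x2, w):
    # Quadratic Ivakhnenko polynomial of two inputs
    return (w[0] + w[1] * x1 + w[2] * x2
            + w[3] * x1 * x1 + w[4] * x2 * x2 + w[5] * x1 * x2)

# Hypothetical coefficients found by the training/optimization step.
w_a = [0.1, 0.5, 0.2, 0.0, 0.0, 0.05]

# e.g. x1 = a cutting-force component, x2 = machining time; the node's
# output feeds further nodes until the network emits a flank-wear estimate.
z = poly_node(1.0, 2.0, w_a)
print(round(z, 3))
```

Training selects both which node outputs connect to which inputs and the coefficient vector of each node, which is what the abstract means by optimizing the network's connectivity and polynomial coefficients jointly.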
Investigation of Gear and Bearing Fatigue Damage Using Debris Particle Distributions
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Lewicki, David G.; Decker, Harry J.
2004-01-01
A diagnostic tool was developed for detecting fatigue damage to spur gears, spiral bevel gears, and rolling element bearings. This diagnostic tool was developed and evaluated experimentally by collecting oil debris data from fatigue tests performed in the NASA Glenn Spur Gear Fatigue Rig, Spiral Bevel Gear Test Facility, and the 500 hp Helicopter Transmission Test Stand. During each test, data from an online, in-line, inductance-type oil debris sensor was monitored and recorded for the occurrence of pitting damage. Results indicate oil debris alone cannot discriminate between bearing and gear fatigue damage.
Timmer, M A; Gouw, S C; Feldman, B M; Zwagemaker, A; de Kleijn, P; Pisters, M F; Schutgens, R E G; Blanchette, V; Srivastava, A; David, J A; Fischer, K; van der Net, J
2018-03-01
Monitoring clinical outcome in persons with haemophilia (PWH) is essential in order to provide optimal treatment for individual patients and compare effectiveness of treatment strategies. Experience with measurement of activities and participation in haemophilia is limited and consensus on preferred tools is lacking. The aim of this study was to give a comprehensive overview of the measurement properties of a selection of commonly used tools developed to assess activities and participation in PWH. Electronic databases were searched for articles that reported on reliability, validity or responsiveness of predetermined measurement tools (5 self-reported and 4 performance-based measurement tools). Methodological quality of the studies was assessed according to the COSMIN checklist. Best evidence synthesis was used to summarize evidence on the measurement properties. The search resulted in 3453 unique hits. Forty-two articles were included. The self-reported Haemophilia Activity List (HAL), Pediatric HAL (PedHAL) and the performance-based Functional Independence Score in Haemophilia (FISH) were studied most extensively. Methodological quality of the studies was limited. Measurement error, cross-cultural validity and responsiveness have been insufficiently evaluated. Albeit based on limited evidence, the measurement properties of the PedHAL, HAL and FISH are currently considered most satisfactory. Further research needs to focus on measurement error, responsiveness, interpretability and cross-cultural validity of the self-reported tools and validity of performance-based tools which are able to assess limitations in sports and leisure activities. © 2018 The Authors. Haemophilia Published by John Wiley & Sons Ltd.
Selliah, S S; Cussion, S; MacPherson, K A; Reiner, E J; Toner, D
2001-06-01
Matrix-matched environmental certified reference materials (CRMs) are one of the most useful tools to validate analytical methods, assess analytical laboratory performance and to assist in the resolution of data conflicts between laboratories. This paper describes the development of a lake sediment as a CRM for polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like polychlorinated biphenyls (DLPCBs). The presence of DLPCBs in the environment is of increased concern and analytical methods are being developed internationally for monitoring DLPCBs in the environment. This paper also reports the results of an international interlaboratory study involving thirty-five laboratories from seventeen countries, conducted to characterize and validate levels of a sediment reference material for PCDDs, PCDFs and DLPCBs.
Cheung, Carol C; D'Arrigo, Corrado; Dietel, Manfred; Francis, Glenn D; Fulton, Regan; Gilks, C Blake; Hall, Jacqueline A; Hornick, Jason L; Ibrahim, Merdol; Marchetti, Antonio; Miller, Keith; van Krieken, J Han; Nielsen, Soren; Swanson, Paul E; Taylor, Clive R; Vyberg, Mogens; Zhou, Xiaoge; Torlakovic, Emina E
2017-04-01
The numbers of diagnostic, prognostic, and predictive immunohistochemistry (IHC) tests are increasing; the implementation and validation of new IHC tests, revalidation of existing tests, as well as the on-going need for daily quality assurance monitoring present significant challenges to clinical laboratories. There is a need for proper quality tools, specifically tissue tools that will enable laboratories to successfully carry out these processes. This paper clarifies, through the lens of laboratory tissue tools, how validation, verification, and revalidation of IHC tests can be performed in order to develop and maintain high quality "fit-for-purpose" IHC testing in the era of precision medicine. This is the final part of the 4-part series "Evolution of Quality Assurance for Clinical Immunohistochemistry in the Era of Precision Medicine."
Active Low Intrusion Hybrid Monitor for Wireless Sensor Networks
Navia, Marlon; Campelo, Jose C.; Bonastre, Alberto; Ors, Rafael; Capella, Juan V.; Serrano, Juan J.
2015-01-01
Several systems have been proposed to monitor wireless sensor networks (WSN). These systems may be active (causing a high degree of intrusion) or passive (low observability inside the nodes). This paper presents the implementation of an active hybrid (hardware and software) monitor with low intrusion. It is based on the addition to the sensor node of a monitor node (hardware part) which, through a standard interface, is able to receive the monitoring information sent by a piece of software executed in the sensor node. The intrusion on time, code, and energy caused in the sensor nodes by the monitor is evaluated as a function of data size and the interface used. Then different interfaces, commonly available in sensor nodes, are evaluated: serial transmission (USART), serial peripheral interface (SPI), and parallel. The proposed hybrid monitor provides highly detailed information, barely disturbed by the measurement tool (interference), about the behavior of the WSN that may be used to evaluate many properties such as performance, dependability, security, etc. Monitor nodes are self-powered and may be removed after the monitoring campaign to be reused in other campaigns and/or WSNs. No other hardware-independent monitoring platforms with such low interference have been found in the literature. PMID:26393604
NASA Astrophysics Data System (ADS)
Weltzin, J. F.; Scully, R. A.; Bayer, J.
2016-12-01
Individual natural resource monitoring programs have evolved in response to different organizational mandates, jurisdictional needs, issues and questions. We are establishing a collaborative forum for large-scale, long-term monitoring programs to identify opportunities where collaboration could yield efficiency in monitoring design, implementation, analyses, and data sharing. We anticipate these monitoring programs will have similar requirements - e.g. survey design, standardization of protocols and methods, information management and delivery - that could be met by enterprise tools to promote sustainability, efficiency and interoperability of information across geopolitical boundaries or organizational cultures. MonitoringResources.org, a project of the Pacific Northwest Aquatic Monitoring Partnership, provides an on-line suite of enterprise tools focused on aquatic systems in the Pacific Northwest Region of the United States. We will leverage and expand this existing capacity to support continental-scale monitoring of both aquatic and terrestrial systems. The current stakeholder group is focused on programs led by bureaus within the Department of the Interior, but the tools will be readily and freely available to a broad variety of other stakeholders. Here, we report the results of two initial stakeholder workshops focused on (1) establishing a collaborative forum of large scale monitoring programs, (2) identifying and prioritizing shared needs, (3) evaluating existing enterprise resources, (4) defining priorities for development of enhanced capacity for MonitoringResources.org, and (5) identifying a small number of pilot projects that can be used to define and test development requirements for specific monitoring programs.
The evolution of monitoring system: the INFN-CNAF case study
NASA Astrophysics Data System (ADS)
Bovina, Stefano; Michelotto, Diego
2017-10-01
Over the past two years, the operations at CNAF, the ICT center of the Italian Institute for Nuclear Physics, have undergone significant changes. The adoption of configuration management tools, such as Puppet, and the constant increase of dynamic and cloud infrastructures have led us to investigate a new monitoring approach. The present work deals with the centralization of the monitoring service at CNAF through a scalable and highly configurable monitoring infrastructure. The selection of tools has been made taking into account the following requirements given by users: (I) adaptability to dynamic infrastructures; (II) ease of configuration and maintenance; (III) capability to provide more flexibility; (IV) compatibility with the existing monitoring system; (V) re-usability and ease of access to information and data. In the paper, the CNAF monitoring infrastructure and its related components are described: Sensu as monitoring router, InfluxDB as the time series database to store data gathered from sensors, Uchiwa as monitoring dashboard and Grafana as a tool to create dashboards and to visualize time series metrics.
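In a stack like the one described, metrics reach InfluxDB encoded in its line protocol (`measurement,tag=value field=value timestamp`). A minimal sketch of formatting one point, as a check handler might before POSTing it to the database; the measurement, tag and host names are hypothetical:

```python
# Sketch: formatting a metric point in InfluxDB line protocol.
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Build one line-protocol record; timestamp is in nanoseconds."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol("cpu_load", {"host": "cnaf-node01"},
                        {"value": 0.42}, ts_ns=1)
print(line)  # cpu_load,host=cnaf-node01 value=0.42 1
```

A real handler would also escape special characters in tag and field values as the line-protocol specification requires; that detail is omitted here.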
ERIC Educational Resources Information Center
Cassidy-Floyd, Juliet
2017-01-01
Florida, from 1971 to 2014, used the Florida Comprehensive Assessment Test (FCAT) as a yearly accountability tool throughout the education system in the state (Bureau of K-12 Assessment, 2005). Schools use their own assessments to determine if students are making progress throughout the year. In one school district within Florida, Performance…
A Security Monitoring Framework For Virtualization Based HEP Infrastructures
NASA Astrophysics Data System (ADS)
Gomez Ramirez, A.; Martinez Pedreira, M.; Grigoras, C.; Betev, L.; Lara, C.; Kebschull, U.;
2017-10-01
High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequence of system calls for detecting anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that accomplishes these requirements, with a proof of concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves the security by isolating services and Jobs without a significant performance impact. We also describe a collected dataset for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), logfiles from operating system services, and system call data collected from production Jobs running in an ALICE Grid test site and a big set of malware samples. This malware set was collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious Jobs.
Human Factors and Modeling Methods in the Development of Control Room Modernization Concepts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hugo, Jacques V.; Slay III, Lorenzo
nuclear power plants. Although the nuclear industry has made steady improvement in outage optimization, each day of a refueling outage still represents an opportunity to save millions of dollars and each day an outage extends past its planned end date represents millions of dollars that may have been spent unnecessarily. Reducing planned outage duration or preventing outage extensions requires careful management of the outage schedule as well as constant oversight and monitoring of work completion during the outage execution. During a typical outage, there are more than 10,000 activities on the schedule that, if not managed efficiently, may cause expensive outage delays. Management of outages currently relies largely on paper-based resources and general-purpose office software. A typical tool currently used to monitor work performance is a burn-down curve, where total remaining activities are plotted against the baseline schedule to track bulk work completion progress. While these tools are useful, there is still considerable uncertainty during a typical outage that bulk work progress is adequate and therefore a lot of management time is spent analyzing the situation on a daily basis. This paper describes recent advances made in developing a framework for the design of visual outage information presentation, as well as an overview of the scientific principles that informed the development of the visualizations. To test the utility of advanced visual outage information presentation, an outage management dashboard software application was created as part of the Department of Energy’s Advanced Outage Control Center project. This dashboard is intended to present all the critical information an outage manager would need to understand the current status of a refueling outage. The dashboard presents the critical path, bulk work performance, key performance indicators, outage milestones and metrics relating current performance to historical performance.
Additionally, the dashboard includes data analysis tools to allow outage managers to drill down into the underlying data to understand the drivers of the indicators.
Symbolic Constraint Maintenance Grid
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Version 3.1 of Symbolic Constraint Maintenance Grid (SCMG) is a software system that provides a general conceptual framework for utilizing pre-existing programming techniques to perform symbolic transformations of data. SCMG also provides a language (and an associated communication method and protocol) for representing constraints on the original non-symbolic data. SCMG provides a facility for exchanging information between numeric and symbolic components without knowing the details of the components themselves. In essence, it integrates symbolic software tools (for diagnosis, prognosis, and planning) with non-artificial-intelligence software. SCMG executes a process of symbolic summarization and monitoring of continuous time series data that are being abstractly represented as symbolic templates of information exchange. This summarization process enables such symbolic-reasoning computing systems as artificial-intelligence planning systems to evaluate the significance and effects of channels of data more efficiently than would otherwise be possible. As a result of the increased efficiency in representation, reasoning software can monitor more channels and is thus able to perform monitoring and control functions more effectively.
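The symbolic-summarization idea, turning a continuous channel into a compact sequence of symbols a planner can reason over, can be sketched very simply. This is not SCMG's actual representation or protocol; the threshold-based symbols and run-length encoding below are an illustrative stand-in.

```python
# Sketch: summarizing a continuous channel into run-length-encoded symbols,
# loosely in the spirit of symbolic summarization (thresholds are hypothetical).
def symbolize(samples, low, high):
    """Map samples to LOW/NOMINAL/HIGH and collapse runs into (symbol, count)."""
    out = []
    for x in samples:
        s = "LOW" if x < low else "HIGH" if x > high else "NOMINAL"
        if not out or out[-1][0] != s:
            out.append((s, 1))       # start a new run
        else:
            out[-1] = (s, out[-1][1] + 1)  # extend the current run
    return out

print(symbolize([1, 2, 2, 9, 9, 9, 1], low=1.5, high=8))
```

The seven raw samples compress to four symbolic runs, which is what lets a reasoner evaluate many channels more cheaply than scanning every raw sample.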
NASA Astrophysics Data System (ADS)
Pagliarone, C. E.; Uttaro, S.; Cappelli, L.; Fallone, M.; Kartal, S.
2017-02-01
CAT, Cryogenic Analysis Tools, is a software package developed in the LabVIEW and ROOT environments to analyze the performance of large cryostats, where many parameters, inputs, and control variables must be acquired and studied at the same time. The present paper describes how CAT works and the main improvements achieved in the new version, CAT 2. New graphical user interfaces have been developed to make the full package more user-friendly, and a process of resource optimization has been carried out. Offline analysis of the full cryostat performance is available both through the ROOT command-line interface and through the new graphical interfaces.
Bayer, Jennifer M.; Weltzin, Jake F.; Scully, Rebecca A.
2017-01-01
Objectives of the workshop were: 1) identify resources that support natural resource monitoring programs working across the data life cycle; 2) prioritize desired capacities and tools to facilitate monitoring design and implementation; 3) identify standards and best practices that improve discovery, accessibility, and interoperability of data across programs and jurisdictions; and 4) contribute to an emerging community of practice focused on natural resource monitoring.
Design tool for inventory and monitoring
Charles T. Scott; Renate Bush
2009-01-01
Forest survey planning typically begins by determining the area to be sampled and the attributes to be measured. All too often the data are collected but underutilized because they did not address the critical management questions. The Design Tool for Inventory and Monitoring (DTIM) is being developed by the National Inventory and Monitoring Applications Center in...
There is increasing demand for the implementation of effects-based monitoring and surveillance (EBMS) approaches in the Great Lakes Basin to complement traditional chemical monitoring. Herein, we describe an ongoing multiagency effort to develop and implement EBMS tools, particul...
Web-Based Mathematics Progress Monitoring in Second Grade
ERIC Educational Resources Information Center
Salaschek, Martin; Souvignier, Elmar
2014-01-01
We examined a web-based mathematics progress monitoring tool for second graders. The tool monitors the learning progress of two competences, number sense and computation. A total of 414 students from 19 classrooms in Germany were checked every 3 weeks from fall to spring. Correlational analyses indicate that alternate-form reliability was adequate…
On-line tool breakage monitoring of vibration tapping using spindle motor current
NASA Astrophysics Data System (ADS)
Li, Guangjun; Lu, Huimin; Liu, Gang
2008-10-01
The input current of the driving motor has been used successfully to monitor the cutting state in manufacturing processes for more than a decade. In vibration tapping, however, on-line monitoring of the motor current has not been reported. In this paper, a tap failure prediction method is proposed to monitor the vibration tapping process using the electrical current signal of the spindle motor. The process of vibration tapping is first described. Then the relationship between the vibration tapping torque and the motor current is investigated by theoretical deduction and experimental measurement. Based on those results, a tool breakage monitoring method is proposed that tracks the ratio of current amplitudes in adjacent vibration tapping periods. Finally, a low-frequency vibration tapping system with motor current monitoring is built using a servo motor B-106B and its driver CR06. The proposed method has been demonstrated with experimental data from vibration tapping in titanium alloys. The experiments show that the method is feasible for tool breakage monitoring when tapping small threaded holes: with an amplitude-ratio threshold of 1.2 and at least 2 overruns among 50 adjacent periods, it avoids tool breakage while giving few false alarms.
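The amplitude-ratio rule described in this abstract can be sketched as follows. This is an illustrative reconstruction under stated assumptions (positive current amplitudes, a sliding window over adjacent periods); the function and parameter names are hypothetical, not from the paper.

```python
# Illustrative sketch of the amplitude-ratio rule: flag tool breakage when
# the ratio of spindle-current amplitudes between adjacent tapping periods
# exceeds a threshold (1.2 in the paper) at least `min_overruns` times
# (2 in the paper) within any window of `window` adjacent periods (50).
# Assumes strictly positive current amplitudes.

def detect_breakage(amplitudes, threshold=1.2, window=50, min_overruns=2):
    """Return True if the amplitude-ratio rule signals tool breakage."""
    # Ratio of each period's current amplitude to the previous period's.
    ratios = [b / a for a, b in zip(amplitudes, amplitudes[1:])]
    # Slide a window over the ratios and count threshold overruns.
    for start in range(len(ratios)):
        chunk = ratios[start:start + window]
        if sum(1 for r in chunk if r > threshold) >= min_overruns:
            return True
    return False
```

A steady current trace yields no alarm; two consecutive amplitude jumps within a window trip the detector.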
A UK medical devices regulator's perspective on registries.
Wilkinson, John; Crosbie, Andy
2016-04-01
Registries are powerful tools to support manufacturers in the fulfilment of their obligations to perform post-market surveillance and post-market clinical follow-up of implantable medical devices. They are also a valuable resource for regulators in support of regulatory action and in providing information on the safety of new and innovative technologies. Registries can provide valuable information on the relative performance of both generic device types and manufacturers' individual products, and they complement other sources of information about device performance such as post-market clinical studies and adverse incident reporting. This paper describes the experience of the UK medical device regulator - the Medicines and Healthcare Products Regulatory Agency (MHRA) - of working with registries to monitor the safety and performance of medical devices. Based upon this experience, the authors identify a number of attributes which they consider key if a registry is to contribute effectively to the work of regulators on patient safety monitoring and medical device regulation.
Optimal distribution of borehole geophones for monitoring CO2-injection-induced seismicity
NASA Astrophysics Data System (ADS)
Huang, L.; Chen, T.; Foxall, W.; Wagoner, J. L.
2016-12-01
The U.S. DOE initiative, the National Risk Assessment Partnership (NRAP), aims to develop quantitative risk assessment methodologies for carbon capture, utilization and storage (CCUS). As part of the tasks of the Strategic Monitoring Group of NRAP, we develop a tool for the optimal design of borehole geophone distributions for monitoring CO2-injection-induced seismicity. The tool involves a number of steps, including building a geophysical model for a given CO2 injection site, defining target monitoring regions within CO2-injection/migration zones, generating synthetic seismic data, specifying acceptable uncertainties in input data, and determining the optimal distribution of borehole geophones. We use a synthetic geophysical model as an example to demonstrate the capability of our new tool to design an optimal, cost-effective passive seismic monitoring network using borehole geophones. The model is built based on the geologic features found at the Kimberlina CCUS pilot site located in the southern San Joaquin Valley, California. This tool can provide CCUS operators with a guideline for cost-effective microseismic monitoring of geologic carbon storage and utilization.
Approach to in-process tool wear monitoring in drilling: Application of Kalman filter theory
NASA Astrophysics Data System (ADS)
He, Ning; Zhang, Youzhen; Pan, Liangxian
1993-05-01
The two parameters often used in adaptive control, tool wear and wear rate, are important factors affecting machinability. This paper attempts to use modern cybernetics to solve the in-process tool wear monitoring problem by applying Kalman filter theory to monitor drill wear quantitatively. Based on the experimental results, a dynamic model, a measuring model, and a measurement conversion model suitable for the Kalman filter are established. It is proved that the monitoring system possesses complete observability but not complete controllability. A discriminant for selecting the characteristic parameters is put forward; by this discriminant, the thrust force Fz is selected as the characteristic parameter for monitoring tool wear. An in-process Kalman filter drill wear monitoring system composed of a force sensor, microphotography, and a microcomputer is established. The results obtained by the Kalman filter and by a common indirect measuring method are compared with the real drill wear measured with the aid of microphotography. The comparison shows that the Kalman filter achieves high measurement precision and satisfies the real-time requirement.
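A minimal scalar Kalman filter in the spirit of the approach above might look like this. The linear wear-growth model, the force-wear measurement model Fz = c·wear + d, and every coefficient below are illustrative assumptions, not the paper's identified models.

```python
# Scalar Kalman filter: the state is the drill wear, which is assumed to
# advance by a nominal rate each sampling step; the measurement is the
# thrust force Fz, assumed linear in the wear (Fz = c*wear + d).
# All coefficients are illustrative, not the paper's.

def kalman_wear(fz_measurements, c=50.0, d=100.0, q=1e-4, r_var=4.0,
                wear_rate=0.002, x0=0.0, p0=1.0):
    """Estimate wear from a sequence of thrust-force readings Fz."""
    x, p = x0, p0
    estimates = []
    for fz in fz_measurements:
        # Predict: wear advances by a nominal rate each sampling step.
        x = x + wear_rate
        p = p + q
        # Update against the measurement model Fz = c*wear + d.
        innovation = fz - (c * x + d)
        s = c * c * p + r_var          # innovation variance
        k = p * c / s                  # Kalman gain
        x = x + k * innovation
        p = (1.0 - k * c) * p
        estimates.append(x)
    return estimates
```

With noise-free synthetic forces generated from the same model, the estimate tracks the true wear trajectory.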
Self-Monitoring Symptoms in Glaucoma: A Feasibility Study of a Web-Based Diary Tool
McDonald, Leanne; Glen, Fiona C.; Taylor, Deanna J.
2017-01-01
Purpose. Glaucoma patients annually spend only a few hours in an eye clinic but spend more than 5000 waking hours engaged in everything else. We propose that patients could self-monitor changes in visual symptoms, providing valuable between-clinic information; we test the hypothesis that this is feasible using a web-based diary tool. Methods. Ten glaucoma patients with a range of visual field loss took part in an eight-week pilot study. After completing a series of baseline tests, volunteers were prompted to monitor symptoms every three days and complete a diary about their vision during daily life using a bespoke web-based diary tool. Response to an end-of-study questionnaire about the usefulness of the exercise was the main outcome measure. Results. Eight of the 10 patients rated the monitoring scheme as “valuable” or “very valuable.” The completion rate for items was excellent (96%). Themes from a qualitative synthesis of the diary entries related to behavioural aspects of glaucoma. One patient concluded that a constant focus on monitoring symptoms led to negative feelings. Conclusions. A web-based diary tool for monitoring self-reported glaucoma symptoms is practically feasible. The tool must be carefully designed to ensure participants are benefitting and that it does not increase anxiety. PMID:28546876
Monitoring Object Library Usage and Changes
NASA Technical Reports Server (NTRS)
Owen, R. K.; Craw, James M. (Technical Monitor)
1995-01-01
The NASA Ames Numerical Aerodynamic Simulation program Aeronautics Consolidated Supercomputing Facility (NAS/ACSF) supercomputing center services over 1600 users, and has numerous analysts with root access. Several tools have been developed to monitor object library usage and changes. Some of the tools do "noninvasive" monitoring and other tools implement run-time logging even for object-only libraries. The run-time logging identifies who, when, and what is being used. The benefits are that real usage can be measured, unused libraries can be discontinued, training and optimization efforts can be focused at those numerical methods that are actually used. An overview of the tools will be given and the results will be discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
... Proposed Rule Change to Offer Risk Management Tools Designed to Allow Member Organizations to Monitor and... of the Proposed Rule Change The Exchange proposes to offer risk management tools designed to allow... risk management tools designed to allow member organizations to monitor and address exposure to risk...
Pedagogical monitoring as a tool to reduce dropout in distance learning in family health.
de Castro E Lima Baesse, Deborah; Grisolia, Alexandra Monteiro; de Oliveira, Ana Emilia Figueiredo
2016-08-22
This paper presents the results of a study of the Monsys monitoring system, an educational support tool designed to prevent and control the dropout rate in a distance learning course in family health. Developed by UNA-SUS/UFMA, Monsys was created to enable data mining in the virtual learning environment known as Moodle. This is an exploratory study using documentary and bibliographic research and analysis of the Monsys database. Two classes (2010 and 2011) were selected as research subjects, one with Monsys intervention and the other without. The samples were matched (at a ratio of 1:1) by gender, age, marital status, graduation year, previous graduation status, location, and profession. Statistical analysis was performed using the chi-square test and a multivariate logistic regression model with a 5% significance level. The findings show that the dropout rate in the class in which Monsys was not employed (2010) was 43.2%, whereas the dropout rate in the class of 2011, in which the tool was employed as a pedagogical team aid, was 30.6%. After statistical adjustment, the Monsys monitoring system remained correlated with the course completion variable (adjusted OR = 1.74, 95% CI = 1.17-2.59; p = 0.005), suggesting that the use of the Monsys tool, independently of the adjusted variables, can enhance the likelihood that students will complete the course. A profile analysis of students using the chi-square test revealed a higher completion rate among women (67.7%) than men (52.2%). Analysis of age demonstrated that students between 40 and 49 years dropped out the least (32.1%), and, with regard to professional training, nurses had the lowest dropout rate (36.3%). The use of Monsys significantly reduced dropout, with course completion most strongly associated with the presence of the monitoring system and female gender.
Developing a 300C Analog Tool for EGS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Normann, Randy
2015-03-23
This paper covers the development of a 300°C geothermal well monitoring tool for supporting future EGS (enhanced geothermal systems) power production. This is the first of 3 tools planned. It is an analog tool designed for monitoring well pressure and temperature. Three different circuit topologies are discussed, along with the development of the supporting surface electronics and software, and information is provided on testing electronic circuits and components. One of the major components is the cable used to connect the analog tool to the surface.
NASA Astrophysics Data System (ADS)
Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah
2018-03-01
The production characteristics of the job-shop industry, in which products have wide variety but small volumes, mean that every machine tool is shared to conduct production processes under dynamic load. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for every component should be calculated based on the actual usage of the machine tool components. This paper describes a study on the development of a monitoring system for obtaining information about the usage of each CNC machine tool component in real time, approached by grouping components based on their operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage-phase activity data taken from certain electronic components within the CNC machine: the adaptor, servo driver, and spindle driver, together with additional components such as a microcontroller and relays. The obtained data are used to detect machine utilization phases such as the power-on state, machine-ready state, or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information on machine tool usage, together with its duration, and of displaying the information in the user interface application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard
While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5× speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
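One simple instance of content-based scoring of the kind this framework uses is ranking data blocks by the Shannon entropy of their value histogram and keeping only the top-scoring blocks for the visualization pipeline. This is a generic sketch of the idea, not the authors' actual scoring procedures; all names are illustrative.

```python
import math

# Score each block of simulation output by the Shannon entropy of a
# histogram of its values, then keep only the most informative blocks.

def entropy_score(block, bins=16):
    """Shannon entropy (bits) of a histogram of the block's scalar values."""
    lo, hi = min(block), max(block)
    if lo == hi:
        return 0.0  # a constant block carries no visual information
    counts = [0] * bins
    for v in block:
        counts[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def select_blocks(blocks, budget):
    """Indices of the `budget` highest-scoring blocks, in original order."""
    ranked = sorted(range(len(blocks)),
                    key=lambda i: entropy_score(blocks[i]), reverse=True)
    return sorted(ranked[:budget])
```

Constant blocks score zero and are filtered out first, so the rendering budget is spent where the data varies.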
Psychological and Behavioral Health Issues of Long-Duration Space Missions
NASA Technical Reports Server (NTRS)
Eksuzian, Daniel J.
1998-01-01
It will be the responsibility of the long-duration space flight crew to take the actions necessary to maintain their health and well-being and to cope with medical emergencies without direct assistance from support personnel, including maintaining mental health and managing physiological and psychological changes that may impair decision making and performance. The Behavior and Performance Integrated Product Team at Johnson Space Center, working within the Space Medicine, Monitoring, and Countermeasures Program, has identified critical questions pertaining to long-duration space crew behavioral health, psychological adaptation, human factors and habitability, and sleep and circadian rhythms. Among the projects addressing these questions are: the development of tools to assess cognitive functions during space missions; the development of a model of psychological adaptation in isolated and confined environments; tools and methods for selecting individuals and teams well-suited for long-duration missions; identification of mission-critical tasks and performance evaluation; and measures of sleep quality and correlation to mission performance.
Irvine, Kathryn M.; Manlove, Kezia; Hollimon, Cynthia
2012-01-01
An important consideration for long term monitoring programs is determining the required sampling effort to detect trends in specific ecological indicators of interest. To enhance the Greater Yellowstone Inventory and Monitoring Network’s water resources protocol(s) (O’Ney 2006 and O’Ney et al. 2009 [under review]), we developed a set of tools to: (1) determine the statistical power for detecting trends of varying magnitude in a specified water quality parameter over different lengths of sampling (years) and different within-year collection frequencies (monthly or seasonal sampling) at particular locations using historical data, and (2) perform periodic trend analyses for water quality parameters while addressing seasonality and flow weighting. A power analysis for trend detection is a statistical procedure used to estimate the probability of rejecting the hypothesis of no trend when in fact there is a trend, within a specific modeling framework. In this report, we base our power estimates on using the seasonal Kendall test (Helsel and Hirsch 2002) for detecting trend in water quality parameters measured at fixed locations over multiple years. We also present procedures (R-scripts) for conducting a periodic trend analysis using the seasonal Kendall test with and without flow adjustment. This report provides the R-scripts developed for power and trend analysis, tutorials, and the associated tables and graphs. The purpose of this report is to provide practical information for monitoring network staff on how to use these statistical tools for water quality monitoring data sets.
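The power computation described above can be approximated with a small Monte Carlo sketch: simulate series with a known trend and noise, apply a trend test, and count detections. For simplicity this illustration uses the plain (non-seasonal) Mann-Kendall test with a normal approximation rather than the seasonal Kendall test with flow adjustment used in the report; all names and defaults are assumptions.

```python
import math
import random

def mann_kendall_z(series):
    """Mann-Kendall S statistic standardized to a z-score (no tie correction)."""
    n = len(series)
    s = sum((xj > xi) - (xj < xi)
            for i, xi in enumerate(series)
            for xj in series[i + 1:])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        s -= 1          # continuity correction
    elif s < 0:
        s += 1
    return s / math.sqrt(var_s)

def trend_power(n_years, slope, sigma, n_sims=2000, seed=1):
    """Fraction of simulated trended series detected at two-sided alpha=0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        series = [slope * t + rng.gauss(0.0, sigma) for t in range(n_years)]
        if abs(mann_kendall_z(series)) > 1.96:
            hits += 1
    return hits / n_sims
```

A strong trend relative to the noise yields power near 1; with no trend, the rejection rate falls back to roughly the nominal alpha.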
Kim, Youngseop; Choi, Eun Seo; Kwak, Wooseop; Shin, Yongjin; Jung, Woonggyu; Ahn, Yeh-Chan; Chen, Zhongping
2008-06-01
We demonstrate the use of optical coherence tomography (OCT) as a non-destructive diagnostic tool for evaluating laser-processing performance by imaging the features of a pit and a rim. A pit formed on a material at different laser-processing conditions is imaged using both a conventional scanning electron microscope (SEM) and OCT. Then using corresponding images, the geometrical characteristics of the pit are analyzed and compared. From the results, we could verify the feasibility and the potential of the application of OCT to the monitoring of the laser-processing performance.
Clinical Applications of Gastrointestinal Manometry in Children
2014-01-01
Manometry is a noninvasive diagnostic tool for identifying motility dysfunction of the gastrointestinal tract. Despite the great technical advances in monitoring motility, performance of the study in pediatric patients has several limitations that should be considered during the procedure and interpretation of the test results. This article reviews the clinical applications of conventional esophageal and anorectal manometries in children by describing a technique for performing the test. This review will develop the uniformity required for the methods of performance, the parameters for measurement, and interpretation of test results that could be applied in pediatric clinical practice. PMID:24749084
Amarasiri, Mohan; Kitajima, Masaaki; Nguyen, Thanh H; Okabe, Satoshi; Sano, Daisuke
2017-09-15
The multiple-barrier concept is widely employed in international and domestic guidelines for wastewater reclamation and reuse for microbiological risk management, in which a wastewater reclamation system is designed to achieve guideline values for the performance target of microbial reduction. Enteric viruses are among the pathogens for which target reduction values are stipulated in guidelines, but frequent monitoring to validate human virus removal efficacy is challenging in daily operation due to the cumbersome procedures for virus quantification in wastewater. Bacteriophages have been the first-choice surrogate for this task because of the well-characterized nature of the strains and the presence of established protocols for quantification. Here, we performed a meta-analysis to calculate the average log10 reduction values (LRVs) of somatic coliphages, F-specific phages, MS2 coliphage, and T4 phage achieved by membrane bioreactors, activated sludge, constructed wetlands, pond systems, microfiltration, and ultrafiltration. The calculated LRVs of bacteriophages were then compared with reported human enteric virus LRVs. MS2 coliphage LRVs in MBR processes were shown to be lower than those of norovirus GII and enterovirus, suggesting it as a possible validation and operational monitoring tool. The other bacteriophages provided higher LRVs than the human viruses. The data sets on LRVs of human viruses and bacteriophages are scarce except for MBR and conventional activated sludge processes, which highlights the necessity of investigating LRVs of human viruses and bacteriophages in multiple treatment unit processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Device for Local or Remote Monitoring of Hand Rehabilitation Sessions for Rheumatic Patients
Barabino, Gianluca; Dessì, Alessia; Tradori, Iosto; Piga, Matteo; Mathieu, Alessandro; Raffo, Luigi
2014-01-01
Current clinical practice suggests that recovering the hand functionality lost or reduced by injuries, interventions, and chronic diseases requires, beyond pharmacological treatments, a kinesiotherapy intervention. This form of rehabilitation consists of physical exercises adapted to the specific pathology, and its effectiveness is strongly dependent on the patient's adherence to the program. In this paper we present a novel device with remote monitoring capabilities expressly conceived for the needs of rheumatic patients. It comprises several sensorized tools and can be used either in an outpatient clinic for hand functional evaluation, connected to a PC, or given to the patient for home kinesiotherapy sessions. In the latter case, the device guides the patient through the rehabilitation session, transmitting the relevant statistics about the patient's performance to a TCP/IP server over a GSM/GPRS connection for deferred analysis. An approved clinical trial has been set up in Italy, involving 10 patients with rheumatoid arthritis and 10 with systemic sclerosis, enrolled for 12 weeks in a home rehabilitation program with the proposed device. Their evaluation has been performed with traditional methods as well as with the proposed device. Subjective (hand algofunctional Dreiser's index) and objective (ROM, strength, dexterity) parameters showed a sustained improvement throughout the follow-up. The obtained results proved that the device is an effective and safe tool for assessing hand disability and monitoring kinesiotherapy exercise, suggesting the potential of such a methodology in clinical practice. PMID:27170875
van Riel, Piet; Combe, Bernard; Abdulganieva, Diana; Bousquet, Paola; Courtenay, Molly; Curiale, Cinzia; Gómez-Centeno, Antonio; Haugeberg, Glenn; Leeb, Burkhard; Puolakka, Kari; Ravelli, Angelo; Rintelen, Bernhard; Sarzi-Puttini, Piercarlo
2016-01-01
Treating to target by monitoring disease activity and adjusting therapy to attain remission or low disease activity has been shown to lead to improved outcomes in chronic rheumatic diseases such as rheumatoid arthritis and spondyloarthritis. Patient-reported outcomes (PROs), used in conjunction with clinical measures, add an important perspective on disease activity as perceived by the patient. Several validated PROs are available for inflammatory arthritis, and advances in electronic patient monitoring tools are helping patients with chronic diseases to self-monitor and assess their symptoms and health. Frequent patient monitoring could potentially lead to the early identification of disease flares or adverse events, early intervention for patients who may require treatment adaptation, and possibly reduced appointment frequency for those with stable disease. A literature search was conducted to evaluate the potential role of patient self-monitoring and innovative monitoring tools in optimising disease control in inflammatory arthritis. Experience from the treatment of congestive heart failure, diabetes, and hypertension shows improved outcomes with remote electronic self-monitoring by patients. In inflammatory arthritis, electronic self-monitoring has been shown to be feasible for patients despite manual disability and to be acceptable to older patients. Patients' self-assessment of disease activity using such methods correlates well with disease activity assessed by rheumatologists. This review also describes several remote monitoring tools that are being developed and used in inflammatory arthritis, offering the potential to improve disease management and reduce pressure on specialists. PMID:27933206
NASA Astrophysics Data System (ADS)
Webley, P.; Dehn, J.; Dean, K. G.; Macfarlane, S.
2010-12-01
Volcanic eruptions are a global hazard, affecting local infrastructure, impacting airports, and hindering the aviation community, as seen in Europe during spring 2010 from the Eyjafjallajokull eruption in Iceland. Here, we show how remote sensing data are used through web-based interfaces for monitoring volcanic activity, both ground-based thermal signals and airborne ash clouds. These ‘web tools’, http://avo.images.alaska.edu/, provide timely availability of polar-orbiting and geostationary data from US National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, and Japanese Meteorological Agency satellites for the North Pacific (NOPAC) region. These data are used operationally by the Alaska Volcano Observatory (AVO) for monitoring volcanic activity, especially at remote volcanoes, and the tools generate ‘alarms’ for any detected volcanic activity and ash clouds. The web tools allow the remote sensing team of AVO to easily perform their twice-daily monitoring shifts, and they also assist the National Weather Service, Alaska, and the Kamchatkan Volcanic Emergency Response Team, Russia, in their operational duties. Users are able to detect ash clouds and measure the distance from the source, area, and signal strength. Within the web tools, there are 40 x 40 km datasets centered on each volcano and a searchable database of all data acquired from 1993 to the present, with the ability to produce time series data per volcano. Additionally, a data center illustrates the data acquired across the NOPAC within the last 48 hours, http://avo.images.alaska.edu/tools/datacenter/. We will illustrate new visualization tools allowing users to display the satellite imagery within Google Earth/Maps and ArcGIS Explorer, both as static maps and as time-animated imagery. We will show these tools in real time as well as examples of past large volcanic eruptions.
In the future, we will develop the tools to produce real-time ash retrievals, run volcanic ash dispersion models from detected ash clouds and develop the browser interfaces to display other remote sensing datasets, such as volcanic sulfur dioxide detection.
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs or may be too time-consuming to be used on a regular basis with an operational test that has a short window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It combines quality control charts, model-based approaches, and time series techniques to accommodate the needs of monitoring scale scores: continuous monitoring, adjustment for customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
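A minimal EWMA control chart of the kind used for continuous score monitoring can be sketched as below: smooth the mean scale score across administrations and flag any administration whose EWMA drifts beyond k-sigma limits around a historical baseline. This is a generic sketch of the control-chart ingredient only, not the paper's full procedure; names and defaults are assumptions.

```python
# EWMA control chart: baseline mean0/sd0 would come from historical
# administrations; lam is the smoothing weight, k the sigma multiplier.

def ewma_flags(scores, mean0, sd0, lam=0.2, k=3.0):
    """Boolean flag per administration: True where the EWMA is out of control."""
    ewma, flags = mean0, []
    for t, x in enumerate(scores, start=1):
        ewma = lam * x + (1.0 - lam) * ewma
        # Time-varying EWMA standard deviation; it widens toward its asymptote.
        sigma_e = sd0 * (lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))) ** 0.5
        flags.append(abs(ewma - mean0) > k * sigma_e)
    return flags
```

An abrupt shift in the mean score is flagged within a couple of administrations, while in-control administrations raise no alarm.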
The Rendezvous Monitoring Display Capabilities of the Rendezvous and Proximity Operations Program
NASA Technical Reports Server (NTRS)
Brazzel, Jack; Spehar, Pete; Clark, Fred; Foster, Chris; Eldridge, Erin
2013-01-01
The Rendezvous and Proximity Operations Program (RPOP) is a laptop-computer-based relative navigation tool and piloting aid that was developed during the Space Shuttle program. RPOP displays a graphical representation of the relative motion between the target and chaser vehicles in a rendezvous, proximity operations, and capture scenario. After being used in over 60 Shuttle rendezvous missions, some of the RPOP display concepts have become recognized as a minimum standard for cockpit displays for monitoring the rendezvous task. To support International Space Station (ISS) based crews in monitoring incoming visiting vehicles, RPOP has been modified to allow crews to compare the Cygnus visiting vehicle's onboard navigated state to processed range measurements from an ISS-based, crew-operated Hand Held Lidar sensor. This paper will discuss the display concepts of RPOP that have proven useful in performing and monitoring rendezvous and proximity operations.
Batch Statistical Process Monitoring Approach to a Cocrystallization Process.
Sarraguça, Mafalda C; Ribeiro, Paulo R S; Dos Santos, Adenilson O; Lopes, João A
2015-12-01
Cocrystals are defined as crystalline structures composed of two or more compounds that are solid at room temperature and held together by noncovalent bonds. Their main advantages are increased solubility, bioavailability, permeability, and stability, while retaining the bioactivity of the active pharmaceutical ingredient. The cocrystallization of furosemide and nicotinamide by solvent evaporation was monitored on-line using near-infrared spectroscopy (NIRS) as a process analytical technology tool. The near-infrared spectra were analyzed using principal component analysis. Batch statistical process monitoring was used to create control charts to perceive the process trajectory and define control limits. Batches under normal and non-normal operating conditions were performed and monitored with NIRS. The use of NIRS associated with batch statistical process models allowed the detection of abnormal variations in critical process parameters, such as the amount of solvent or the amounts of the initial components present in the cocrystallization. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
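For two monitored variables, a batch control chart of this general kind reduces to comparing a Hotelling T² statistic against a control limit. The sketch below writes the 2×2 covariance inverse out explicitly; it illustrates the generic multivariate-control-chart technique, not the authors' NIRS/PCA model, and all names are assumptions.

```python
# Hotelling T^2 for a 2-variable observation against normal-operating-
# condition (NOC) statistics: T^2 = d' S^{-1} d, with the 2x2 inverse
# of the covariance matrix S expanded by hand.

def t2_statistic(x, mean, cov):
    """Hotelling T^2 of a 2-variable observation, given NOC mean and covariance."""
    a, b = x[0] - mean[0], x[1] - mean[1]
    (s11, s12), (_, s22) = cov
    det = s11 * s22 - s12 * s12
    return (s22 * a * a - 2.0 * s12 * a * b + s11 * b * b) / det
```

A batch observation is flagged when its T² exceeds a control limit derived from the NOC batches; with an identity covariance, T² is simply the squared distance from the mean.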
New Tool to Control and Monitor Weighted Vest Training Load for Sprinting and Jumping in Soccer.
Carlos-Vivas, Jorge; Freitas, Tomás T; Cuesta, Miguel; Perez-Gomez, Jorge; De Hoyo, Moisés; Alcaraz, Pedro E
2018-04-26
Carlos-Vivas, J, Freitas, TT, Cuesta, M, Perez-Gomez, J, De Hoyo, M, and Alcaraz, PE. New tool to control and monitor weighted vest training load for sprinting and jumping in soccer. J Strength Cond Res XX(X): 000-000, 2018-The purpose of this study was to develop 2 regression equations that accurately describe the relationship between weighted vest loads and performance indicators in sprinting (i.e., maximum velocity, Vmax) and jumping (i.e., maximum height, Hmax). Also, this study aimed to investigate the effects of increasing the load on spatio-temporal variables and power development in soccer players and to determine the "optimal load" for sprinting and jumping. Twenty-five semiprofessional soccer players performed the sprint test, whereas a total of 46 completed the vertical jump test. Two different regression equations were developed for calculating the load for each exercise. The following equations were obtained: % body mass (BM) = -2.0762·%Vmax + 207.99 for the sprint and % BM = -0.7156·%Hmax + 71.588 for the vertical jump. For both sprinting and jumping, when the load increased, Vmax and Hmax decreased. The "optimal load" for resisted training using a weighted vest was unclear for sprinting and close to BM for vertical jump. This study presents a new tool to individualize the training load for resisted sprinting and jumping using a weighted vest in soccer players and to develop the whole force-velocity spectrum according to the objectives of the different periods of the season.
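The two regression equations above can be applied directly to prescribe a vest load for a target intensity. A minimal sketch (the function names are illustrative; the coefficients are exactly those reported in the abstract):

```python
def vest_load_sprint(pct_vmax):
    """Vest load (% body mass) for a target percentage of maximum sprint velocity."""
    return -2.0762 * pct_vmax + 207.99

def vest_load_jump(pct_hmax):
    """Vest load (% body mass) for a target percentage of maximum jump height."""
    return -0.7156 * pct_hmax + 71.588

# Training sprints at 90% of maximum velocity:
sprint_load = vest_load_sprint(90)   # ~21.1% of body mass
# Jumping at 100% of maximum height implies essentially no added load,
# consistent with the "optimal load" for jumping being close to BM:
jump_load = vest_load_jump(100)      # ~0.03% of body mass
```

Note that both equations return load as a percentage of body mass, so the absolute vest mass is obtained by multiplying by the player's body mass.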
ERIC Educational Resources Information Center
Reddy, Linda A.; Dudek, Christopher M.
2014-01-01
In the era of teacher evaluation and effectiveness, assessment tools that identify and monitor educators' instruction and behavioral management practices are in high demand. The Classroom Strategies Scale (CSS) Observer Form is a multidimensional teacher progress monitoring tool designed to assess teachers' usage of instructional and behavioral…
Milk urea testing as a tool to monitor reproductive performance in Ontario dairy herds.
Godden, S M; Kelton, D F; Lissemore, K D; Walton, J S; Leslie, K E; Lumsden, J H
2001-06-01
Dairy herd improvement test-day data, including milk urea concentrations measured using an infrared test method, were collected from 60 commercial Ontario Holstein dairy herds for a 13-mo period between December 1, 1995, and December 31, 1996. The objective of the study was to describe, at the cow and the group level, the relationship between DHI milk urea concentrations and reproductive performance in commercial dairy herds. When interpreted at the cow level, there was no association between milk urea and the risk for pregnancy from an insemination occurring within the 45-d period preceding test day. However, a negative curvilinear relationship existed between milk urea and the risk for pregnancy from a first, second, or third insemination event occurring within the 45-d period following test day, with the odds for pregnancy being highest when the milk urea on the test day preceding the insemination was either below 4.5 mmol/L or greater than 6.49 mmol/L, compared with a concentration between 4.5 and 6.49 mmol/L. When interpreted at the group level, there was no association between group mean milk urea for cows between 50 and 180 DIM, and the group conception rate for cows receiving a first, second, or third insemination event in the 45-d period either preceding or following test day. Thus, while DHI milk urea measurements may be useful as a management tool to improve the efficiency of production or reduce nitrogen excretion, through helping to optimize the efficiency of protein utilization, they may have limited utility as a monitoring or diagnostic tool for reproductive performance. The results of this study suggest that good fertility may be achieved across a broad range of milk urea concentrations.
NASA Astrophysics Data System (ADS)
Roessler, D.; Weber, B.; Ellguth, E.; Spazier, J.
2017-12-01
The geometry of seismic monitoring networks, site conditions and data availability as well as monitoring targets and strategies typically impose trade-offs between data quality, earthquake detection sensitivity, false detections and alert times. Network detection capabilities typically change with alteration of the seismic noise level by human activity or by varying weather and sea conditions. To give helpful information to operators and maintenance coordinators, gempa developed a range of tools to evaluate earthquake detection and network performance including qceval, npeval and sceval. qceval is a module which analyzes waveform quality parameters in real-time and deactivates and reactivates data streams based on waveform quality thresholds for automatic processing. For example, thresholds can be defined for latency, delay, timing quality, spikes and gaps count and rms. As changes in the automatic processing have a direct influence on detection quality and speed, another tool called "npeval" was designed to calculate in real-time the expected time needed to detect and locate earthquakes by evaluating the effective network geometry. The effective network geometry is derived from the configuration of stations participating in the detection. The detection times are shown as an additional layer on the map and updated in real-time as soon as the effective network geometry changes. Yet another new tool, "sceval", is an automatic module which classifies located seismic events (Origins) in real-time. sceval evaluates the spatial distribution of the stations contributing to an Origin. It confirms or rejects the status of Origins, adds comments or leaves the Origin unclassified. The comments are passed to an additional sceval plug-in where the end user can customize event types. This unique identification of real and fake events in earthquake catalogues allows network detection thresholds to be lowered.
In real-time monitoring situations operators can limit the processing to events with unclassified Origins, reducing their workload. Classified Origins can be treated specifically by other procedures. These modules have been calibrated and fully tested by several complex seismic monitoring networks in the region of Indonesia and Northern Chile.
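The threshold-based gating that qceval performs on data streams can be sketched as follows. This is an illustrative reconstruction only: the metric names, limits, and data are assumptions, not gempa's actual configuration or API.

```python
# Illustrative quality thresholds per metric (units in comments are assumptions).
THRESHOLDS = {
    "latency": 60.0,   # seconds
    "delay": 30.0,     # seconds
    "gaps": 10,        # gap count per evaluation window
    "rms": 5000.0,     # raw counts
}

def stream_enabled(metrics):
    """Return True if every reported quality metric is within its threshold."""
    return all(metrics.get(name, 0) <= limit for name, limit in THRESHOLDS.items())

# Two hypothetical stations: ST02 exceeds the latency threshold and would be
# deactivated for automatic processing until its metrics recover.
streams = {
    "ST01": {"latency": 5.2, "delay": 1.1, "gaps": 0, "rms": 812.0},
    "ST02": {"latency": 140.0, "delay": 2.0, "gaps": 3, "rms": 640.0},
}
active = [sid for sid, m in streams.items() if stream_enabled(m)]
```

In a real-time setting this check would run continuously, re-enabling a stream as soon as its metrics fall back within the thresholds, which is what changes the effective network geometry that npeval evaluates.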
A Multiscale Mapping Assessment of Lake Champlain Cyanobacterial Harmful Algal Blooms
Torbick, Nathan; Corbiere, Megan
2015-01-01
Lake Champlain has bays undergoing chronic cyanobacterial harmful algal blooms that pose a public health threat. Monitoring and assessment tools need to be developed to support risk decision making and to gain a thorough understanding of bloom scales and intensities. In this research application, Landsat 8 Operational Land Imager (OLI), Rapid Eye, and Proba Compact High Resolution Imaging Spectrometer (CHRIS) images were obtained while a corresponding field campaign collected in situ measurements of water quality. Models including empirical band ratio regressions were applied to map chlorophyll-a and phycocyanin concentrations; all sensors performed well with R2 and root-mean-square error (RMSE) ranging from 0.76 to 0.88 and 0.42 to 1.51, respectively. The outcomes showed spatial patterns across the lake with problematic bays having phycocyanin concentrations >25 µg/L. An alert status metric tuned to the current monitoring protocol was generated using modeled water quality to illustrate how the remote sensing tools can inform a public health monitoring system. Among the sensors utilized in this study, Landsat 8 OLI holds the most promise for providing exposure information across a wide area given its resolutions, systematic observation strategy, and zero cost. PMID:26389930
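An empirical band-ratio regression of the kind applied here fits pigment concentration to a ratio of two spectral bands, then thresholds the prediction to drive an alert status. The sketch below is generic: the band values, in situ concentrations, and the linear model form are hypothetical; only the band-ratio approach and the >25 µg/L alert cut-off come from the abstract.

```python
import numpy as np

# Hypothetical reflectances in two bands and co-located in situ phycocyanin (ug/L).
band_a = np.array([0.030, 0.042, 0.055, 0.061, 0.075])
band_b = np.array([0.020, 0.021, 0.022, 0.020, 0.021])
pc_insitu = np.array([8.0, 15.0, 28.0, 36.0, 52.0])

# Fit a linear model of concentration against the band ratio.
ratio = band_a / band_b
slope, intercept = np.polyfit(ratio, pc_insitu, 1)
pc_pred = slope * ratio + intercept
rmse = np.sqrt(np.mean((pc_pred - pc_insitu) ** 2))

# Alert metric in the spirit of the monitoring protocol: flag pixels >25 ug/L.
alert = pc_pred > 25.0
```

Once calibrated against field samples, the same slope and intercept would be applied per pixel across a whole satellite scene to produce the alert map.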
NASA Astrophysics Data System (ADS)
Colla, C.; Gabrielli, E.
2017-01-01
To evaluate the complex behaviour of masonry structures under mechanical loads, numerical models are developed and continuously implemented at diverse scales, whilst, from an experimental viewpoint, laboratory standard mechanical tests are usually carried out by instrumenting the specimens via traditional measuring devices. Extracted values collected in the few points where the tools were installed are assumed to represent the behaviour of the whole specimen, but this may be quite optimistic or approximate. Optical monitoring techniques may help in overcoming some of these limitations by providing full-field visualization of mechanical parameters. Photoelasticity and the more recent digital image correlation (DIC), employed to monitor masonry columns during compression tests, are presented here, and a lab case study is compared, listing procedures, data acquisition, advantages and limitations. It is shown that the information recorded by traditional measuring tools must be considered limited to the specific instrumented points. Instead, DIC in particular among the optical techniques is providing both a very precise global and local picture of the masonry performance, opening new horizons towards a deeper knowledge of this complex construction material. The applicability of an innovative DIC procedure to cultural heritage constructions is also discussed.
FFI: What it is and what it can do for you
Duncan C. Lutes; MaryBeth Keifer; Nathan C. Benson; John F. Caratti
2009-01-01
A new monitoring tool called FFI (FEAT/FIREMON Integrated) has been developed to assist managers with collection, storage and analysis of ecological information. The tool was developed through the complementary integration of two fire effects monitoring systems commonly used in the United States: FIREMON and the Fire Ecology Assessment Tool (FEAT). FFI provides...
PCD tool wear and its monitoring in machining tungsten
NASA Astrophysics Data System (ADS)
Wang, Lijiang; Zhang, Zhenlie; Sun, Qi; Liu, Pin
The views of Chinese and foreign researchers are quite different as to whether or not polycrystalline diamond (PCD) tools can machine tungsten that is used in the aerospace and electronic industries. A study is presented that shows the possibility of machining tungsten, and a new method is developed for monitoring the tool wear in production.
Plews, Daniel J; Laursen, Paul B; Stanley, Jamie; Kilding, Andrew E; Buchheit, Martin
2013-09-01
The measurement of heart rate variability (HRV) is often considered a convenient non-invasive assessment tool for monitoring individual adaptation to training. Decreases and increases in vagal-derived indices of HRV have been suggested to indicate negative and positive adaptations, respectively, to endurance training regimens. However, much of the research in this area has involved recreational and well-trained athletes, with the small number of studies conducted in elite athletes revealing equivocal outcomes. For example, in elite athletes, studies have revealed both increases and decreases in HRV to be associated with negative adaptation. Additionally, signs of positive adaptation, such as increases in cardiorespiratory fitness, have been observed with atypical concomitant decreases in HRV. As such, practical ways by which HRV can be used to monitor training status in elites are yet to be established. This article addresses the current literature that has assessed changes in HRV in response to training loads and the likely positive and negative adaptations shown. We reveal limitations with respect to how the measurement of HRV has been interpreted to assess positive and negative adaptation to endurance training regimens and subsequent physical performance. We offer solutions to some of the methodological issues associated with using HRV as a day-to-day monitoring tool. These include the use of appropriate averaging techniques, and the use of specific HRV indices to overcome the issue of HRV saturation in elite athletes (i.e., reductions in HRV despite decreases in resting heart rate). Finally, we provide examples in Olympic and World Champion athletes showing how these indices can be practically applied to assess training status and readiness to perform in the period leading up to a pinnacle event. The paper reveals how longitudinal HRV monitoring in elites is required to understand their unique individual HRV fingerprint. 
For the first time, we demonstrate how increases and decreases in HRV relate to changes in fitness and freshness, respectively, in elite athletes.
Neogi, Ujjwal; Gupta, Soham; Rodridges, Rashmi; Sahoo, Pravat Nalini; Rao, Shwetha D.; Rewari, Bharat B.; Shastri, Suresh; De Costa, Ayesha; Shet, Anita
2012-01-01
Background & objectives: Monitoring of HIV-infected individuals on antiretroviral treatment (ART) ideally requires periodic viral load measurements to ascertain adequate response to treatment. While plasma viral load monitoring is widely available in high-income settings, it is rarely used in resource-limited regions because of high cost and need for sophisticated sample transport. Dried blood spot (DBS) as source specimens for viral load measurement has shown promise as an alternative to plasma specimens and is likely to be a useful tool for Indian settings. The present study was undertaken to investigate the performance of DBS in HIV-1 RNA quantification against the standard plasma viral load assay. Methods: Between April-June 2011, 130 samples were collected from HIV-1-infected (n=125) and non-infected (n=5) individuals in two district clinics in southern India. HIV-1 RNA quantification was performed from DBS and plasma using the Abbott m2000rt system after manual RNA extraction. Statistical analysis included correlation, regression and Bland-Altman analysis. Results: The sensitivity of DBS viral load was 97 per cent with viral loads >3.0 log10 copies/ml. Measurable viral load (>3.0 log10 copies/ml) results obtained for the 74 paired plasma-DBS samples showed positive correlation between both the assays (r=0.96). For clinically acceptable viral load threshold values of >5,000 copies/ml, Bland-Altman plots showed acceptable limits of agreement (−0.21 to +0.8 log10 copies/ml). The mean difference was 0.29 log10 copies/ml. The cost of DBS was $2.67 lower compared to conventional plasma viral load measurement in the setting. Interpretation & conclusions: The significant positive correlation with the standard plasma-based assay and lower cost of DBS viral load monitoring suggest that DBS sampling can be a feasible and economical means of viral load monitoring in HIV-infected individuals in India and in other resource-limited settings globally. PMID:23391790
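The Bland-Altman analysis used above to compare paired plasma and DBS measurements computes the mean difference (systematic bias) and 95% limits of agreement. A minimal sketch, using made-up paired log10 values rather than the study's data:

```python
import numpy as np

# Hypothetical paired log10 viral loads for the same specimens.
plasma = np.array([4.1, 5.3, 3.8, 6.0, 4.7])
dbs = np.array([4.4, 5.5, 4.2, 6.2, 5.1])

diff = dbs - plasma
mean_diff = diff.mean()            # systematic bias between methods
sd_diff = diff.std(ddof=1)         # sample standard deviation of the differences
limits = (mean_diff - 1.96 * sd_diff,
          mean_diff + 1.96 * sd_diff)  # 95% limits of agreement
```

In the conventional plot, `diff` is drawn against the per-pair mean `(dbs + plasma) / 2`, with horizontal lines at `mean_diff` and the two limits; agreement is judged by whether the limits are clinically acceptable, as the study does for the >5,000 copies/ml threshold.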
Integrated multi sensors and camera video sequence application for performance monitoring in archery
NASA Astrophysics Data System (ADS)
Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali
2018-03-01
This paper explains the development of a comprehensive archery performance monitoring software which consisted of three camera views and five body sensors. The five body sensors evaluate biomechanical related variables of flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views with the five body sensors are integrated into a single computer application which enables the user to view all the data in a single user interface. The five body sensors’ data are displayed in numerical and graphical form in real-time. The information transmitted by the body sensors is computed with an embedded algorithm that automatically produces a summary of the athlete’s biomechanical performance and displays it in the application interface. This performance will later be compared to the pre-computed psycho-fitness performance derived from data prefilled into the application. All the data (camera views, body sensors, performance computations) are recorded for further analysis by a sports scientist. Our developed application serves as a powerful tool for assisting the coach and athletes to observe and identify any wrong technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.
Ground-penetrating radar: A tool for monitoring bridge scour
Anderson, N.L.; Ismael, A.M.; Thitimakorn, T.
2007-01-01
Ground-penetrating radar (GPR) data were acquired across shallow streams and/or drainage ditches at 10 bridge sites in Missouri by maneuvering the antennae across the surface of the water and riverbank from the bridge deck, manually or by boat. The acquired two-dimensional and three-dimensional data sets accurately image the channel bottom, demonstrating that the GPR tool can be used to estimate and/or monitor water depths in shallow fluvial environments. The study results demonstrate that the GPR tool is a safe and effective tool for measuring and/or monitoring scour in proximity to bridges. The technique can be used to safely monitor scour at assigned time intervals during peak flood stages, thereby enabling owners to take preventative action prior to potential failure. The GPR tool can also be used to investigate depositional and erosional patterns over time, thereby elucidating these processes on a local scale. In certain instances, in-filled scour features can also be imaged and mapped. This information may be critically important to those engaged in bridge design. GPR has advantages over other tools commonly employed for monitoring bridge scour (reflection seismic profiling, echo sounding, and electrical conductivity probing). The tool doesn't need to be coupled to the water, can be moved rapidly across (or above) the surface of a stream, and provides an accurate depth-structure model of the channel bottom and subchannel bottom sediments. The GPR profiles can be extended across emerged sand bars or onto the shore.
Common Accounting System for Monitoring the ATLAS Distributed Computing Resources
NASA Astrophysics Data System (ADS)
Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration
2014-06-01
This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.
Advancing satellite operations with intelligent graphical monitoring systems
NASA Technical Reports Server (NTRS)
Hughes, Peter M.; Shirah, Gregory W.; Luczak, Edward C.
1993-01-01
For nearly twenty-five years, spacecraft missions have been operated in essentially the same manner: human operators monitor displays filled with alphanumeric text watching for limit violations or other indicators that signal a problem. The task is performed predominantly by humans. Only in recent years have graphical user interfaces and expert systems been accepted within the control center environment to help reduce operator workloads. Unfortunately, the development of these systems is often time consuming and costly. At the NASA Goddard Space Flight Center (GSFC), a new domain specific expert system development tool called the Generic Spacecraft Analyst Assistant (GenSAA) has been developed. Through the use of a highly graphical user interface and point-and-click operation, GenSAA facilitates the rapid, 'programming-free' construction of intelligent graphical monitoring systems to serve as real-time, fault-isolation assistants for spacecraft analysts. Although specifically developed to support real-time satellite monitoring, GenSAA can support the development of intelligent graphical monitoring systems in a variety of space and commercial applications.
Vision training methods for sports concussion mitigation and management.
Clark, Joseph F; Colosimo, Angelo; Ellis, James K; Mangine, Robert; Bixenmann, Benjamin; Hasselfeld, Kimberly; Graman, Patricia; Elgendy, Hagar; Myer, Gregory; Divine, Jon
2015-05-05
There is emerging evidence supporting the use of vision training, including light board training tools, as a concussion baseline and neuro-diagnostic tool and potentially as a supportive component to concussion prevention strategies. This paper is focused on providing detailed methods for select vision training tools and reporting normative data for comparison when vision training is a part of a sports management program. The overall program includes standard vision training methods including tachistoscope, Brock's string, and strobe glasses, as well as specialized light board training algorithms. Stereopsis is measured as a means to monitor vision training effects. In addition, quantitative results for vision training methods as well as baseline and post-testing *A and Reaction Test measures with progressive scores are reported. Collegiate athletes consistently improve after six weeks of training in their stereopsis, *A and Reaction Test scores. When vision training is initiated as a team wide exercise, the incidence of concussion decreases in players who participate in training compared to players who do not receive the vision training. Vision training produces functional and performance changes that, when monitored, can be used to assess the success of the vision training and can be initiated as part of a sports medical intervention for concussion prevention.
Association rule mining on grid monitoring data to detect error sources
NASA Astrophysics Data System (ADS)
Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin
2010-04-01
Error handling is a crucial task in an infrastructure as complex as a grid. There are several monitoring tools put in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault that caused the job failure. Human time and knowledge are required to manually trace back errors to the real fault underlying an error. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the grid components' behavior by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically and this information - expressed by association rules - is visualized in a web interface. This work achieves a decrease in time for fault recovery and yields an improvement of a grid's reliability.
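Association rule mining of the kind described treats each job record as a set of attribute values and looks for rules like "node=X implies exit=137" with high support and confidence. A self-contained sketch (job attributes, thresholds, and the single-antecedent restriction are all illustrative simplifications, not the authors' implementation):

```python
from collections import Counter
from itertools import combinations

# Hypothetical job records: each is a set of attribute=value items.
jobs = [
    {"site=CERN", "node=wn12", "exit=137"},
    {"site=CERN", "node=wn12", "exit=137"},
    {"site=CERN", "node=wn07", "exit=0"},
    {"site=GRIF", "node=wn03", "exit=137"},
]

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Mine one-antecedent rules (ante -> cons) with support and confidence."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        for item in t:
            counts[frozenset([item])] += 1
        for pair in combinations(sorted(t), 2):
            counts[frozenset(pair)] += 1
    found = []
    for itemset, c in counts.items():
        if len(itemset) != 2 or c / n < min_support:
            continue
        a, b = sorted(itemset)
        for ante, cons in ((a, b), (b, a)):
            confidence = c / counts[frozenset([ante])]
            if confidence >= min_confidence:
                found.append((ante, cons, c / n, confidence))
    return found

rules = mine_rules(jobs)
# A rule such as ("node=wn12", "exit=137", 0.5, 1.0) points at a suspect node
# even though the exit code alone would not identify it.
```

The asymmetry of confidence is what makes this useful for fault localization: "exit=137" occurs on several nodes, so the reverse rule does not reach the confidence threshold, while every job on the suspect node failed the same way.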
Bat detective-Deep learning tools for bat acoustic signal detection.
Mac Aodha, Oisin; Gibb, Rory; Barlow, Kate E; Browning, Ella; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R; Newson, Stuart E; Pandourski, Ivan; Parsons, Stuart; Russ, Jon; Szodoray-Paradi, Abigel; Szodoray-Paradi, Farkas; Tilova, Elena; Girolami, Mark; Brostow, Gabriel; Jones, Kate E
2018-03-01
Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio which is particularly problematic in noisy recordings. We developed a convolutional neural network based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio.
NASA Technical Reports Server (NTRS)
Baaklini, George Y.; Smith, Kevin; Raulerson, David; Gyekenyesi, Andrew L.; Sawicki, Jerzy T.; Brasche, Lisa
2003-01-01
Tools for Engine Diagnostics is a major task in the Propulsion System Health Management area of the Single Aircraft Accident Prevention project under NASA's Aviation Safety Program. The major goal of the Aviation Safety Program is to reduce fatal aircraft accidents by 80 percent within 10 years and by 90 percent within 25 years. The goal of the Propulsion System Health Management area is to eliminate propulsion system malfunctions as a primary or contributing factor to the cause of aircraft accidents. The purpose of Tools for Engine Diagnostics, a 2-yr-old task, is to establish and improve tools for engine diagnostics and prognostics that measure the deformation and damage of rotating engine components at the ground level and that perform intermittent or continuous monitoring on the engine wing. In this work, nondestructive-evaluation- (NDE-) based technology is combined with model-dependent disk spin experimental simulation systems, like finite element modeling (FEM) and modal norms, to monitor and predict rotor damage in real time. Fracture mechanics time-dependent fatigue crack growth and damage-mechanics-based life estimation are being developed, and their potential use investigated. In addition, wireless eddy current and advanced acoustics are being developed for on-wing and just-in-time NDE engine inspection to provide deeper access and higher sensitivity to extend on-wing capabilities and improve inspection readiness. In the long run, these methods could establish a base for prognostic sensing while an engine is running, without any overt actions, like inspections. This damage-detection strategy includes experimentally acquired vibration-, eddy-current- and capacitance-based displacement measurements and analytically computed FEM-, modal norms-, and conventional rotordynamics-based models of well-defined damages and critical mass imbalances in rotating disks and rotors.
ERIC Educational Resources Information Center
Heffernon, Rick
2006-01-01
This report presents results tracked by the CAT Measures, a 21st century assessment tool for enabling policymakers to monitor "en route" performance of their public investments in science and technology research. Developed by Morrison Institute for Public Policy at Arizona State University, the CAT Measures analyze growth supporting…
Risk based monitoring (RBM) tools for clinical trials: A systematic review.
Hurley, Caroline; Shiely, Frances; Power, Jessica; Clarke, Mike; Eustace, Joseph A; Flanagan, Evelyn; Kearney, Patricia M
2016-11-01
In November 2016, the Integrated Addendum to ICH-GCP E6 (R2) will advise trial sponsors to develop a risk-based approach to clinical trial monitoring. This new process is commonly known as risk based monitoring (RBM). To date, a variety of tools have been developed to guide RBM. However, a gold standard approach does not exist. This review aims to identify and examine RBM tools. Review of published and grey literature using a detailed search-strategy and cross-checking of reference lists. This review included academic and commercial instruments that met the Organisation for Economic Co-operation and Development (OECD) classification of RBM tools. Ninety-one potential RBM tools were identified and 24 were eligible for inclusion. These tools were published between 2000 and 2015. Eight tools were paper-based or electronic questionnaires and 16 operated as Software as a Service (SaaS). Risk associated with the investigational medicinal product (IMP), phase of the clinical trial and study population were examined by all tools and suitable mitigation guidance through on-site and centralised monitoring was provided. RBM tools for clinical trials are relatively new, their features and use vary widely and they continue to evolve. This makes it difficult to identify the "best" RBM technique or tool. For example, equivalence testing is required to determine if RBM strategies directed by paper-based and SaaS-based RBM tools are comparable. Such research could be embedded within multi-centre clinical trials and conducted as a SWAT (Study within a Trial). Copyright © 2016 Elsevier Inc. All rights reserved.
Moisture Performance of High-R Wall Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Nay B.; Kochkin, Vladimir
High-performance homes offer improved comfort, lower utility bills, and assured durability. The next generation of building enclosures is a key step toward achieving high-performance goals through decreasing energy load demand and enabling advanced space-conditioning systems. Yet the adoption of high-R enclosures and particularly high-R walls has been a slow-growing trend because mainstream builders are hesitant to make the transition. In a survey of builders on this topic, one of the challenges identified is an industry-wide concern about the long-term moisture performance of energy-efficient walls. This study takes a step toward addressing this concern through direct monitoring of the moisture performance of high-R walls in occupied homes in several climate zones. In addition, the robustness of the design and modeling tools for selecting high-R wall solutions is evaluated using the monitored data from the field. The information and knowledge gained through this research will provide an objective basis for decision-making so that builders can implement advanced designs with confidence.
Razban, Behrooz; Nelson, Kristina Y; McMartin, Dena W; Cullimore, D Roy; Wall, Michelle; Wang, Dunling
2012-01-01
An analytical method to produce profiles of bacterial biomass fatty acid methyl esters (FAME) was developed employing rapid agitation followed by static incubation (RASI) using selective media of wastewater microbial communities. The results were compiled to produce a unique library for comparison and performance analysis at a Wastewater Treatment Plant (WWTP). A total of 146 samples from the aerated WWTP, comprising 73 samples each of secondary and tertiary effluent, were analyzed. For comparison purposes, all samples were evaluated via a similarity index (SI), with secondary effluents producing an SI of 0.88 with 2.7% variation and tertiary samples producing an SI of 0.86 with 5.0% variation. The results also highlighted significant differences between the fatty acid profiles of the tertiary and secondary effluents, indicating considerable shifts in the bacterial community profile between these treatment phases. The WWTP performance results using this method were highly replicable and reproducible, indicating that the protocol has potential as a performance-monitoring tool for aerated WWTPs. The results quickly and accurately reflect shifts in dominant bacterial communities that result when process operations and performance change.
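The abstract does not give the formula behind its similarity index; a minimal sketch, assuming cosine similarity between two FAME abundance profiles (the fatty acid names and abundances below are illustrative, not the study's data), might look like:

```python
import math

def similarity_index(profile_a, profile_b):
    """Cosine similarity between two FAME abundance profiles.

    Each profile maps a fatty acid name to its relative abundance.
    The actual SI used in the study is not specified in the abstract;
    cosine similarity is one common choice for such comparisons.
    """
    acids = set(profile_a) | set(profile_b)
    a = [profile_a.get(k, 0.0) for k in acids]
    b = [profile_b.get(k, 0.0) for k in acids]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical secondary- vs. tertiary-effluent profiles
secondary = {"16:0": 0.35, "18:1w9c": 0.25, "cy17:0": 0.15}
tertiary = {"16:0": 0.30, "18:1w9c": 0.10, "cy19:0": 0.20}
print(round(similarity_index(secondary, secondary), 2))  # identical profiles -> 1.0
```

An SI near 1.0 between replicate samples, as reported for the secondary effluents (0.88), would indicate a stable community profile, while lower cross-phase values flag a community shift.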
Nowik, Patrik; Bujila, Robert; Poludniowski, Gavin; Fransson, Annette
2015-07-08
The purpose of this study was to develop a method of performing routine periodic quality controls (QC) of CT systems by automatically analyzing key performance indicators (KPIs), obtainable from images of manufacturers' quality assurance (QA) phantoms. A KPI pertains to a measurable or determinable QC parameter that is influenced by other underlying fundamental QC parameters. The established KPIs are based on relationships between existing QC parameters used in the annual testing program of CT scanners at the Karolinska University Hospital in Stockholm, Sweden. The KPIs include positioning, image noise, uniformity, homogeneity, the CT number of water, and the CT number of air. An application (MonitorCT) was developed to automatically evaluate phantom images in terms of the established KPIs. The developed methodology has been used for two years in clinical routine, where CT technologists perform daily scans of the manufacturer's QA phantom and automatically send the images to MonitorCT for KPI evaluation. In cases where results were out of tolerance, actions could be initiated in less than 10 min. Over the two-year period that MonitorCT has been active, 900 QC scans from two CT scanners have been collected and analyzed. Two types of errors were registered in this period: a ring artifact was discovered with the image noise test, and a calibration error was detected multiple times with the CT number test. In both cases, results were outside the tolerances defined for MonitorCT, as well as by the vendor. Automated monitoring of KPIs is a powerful tool that can be used to supplement established QC methodologies. Medical physicists and other professionals concerned with the performance of a CT system will, using such methods, have access to comprehensive data on the current and historical (trend) status of the system, such that swift actions can be taken to ensure the quality of CT examinations, patient safety, and minimal disruption of service.
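The core of such a KPI evaluation is a tolerance check per indicator. A minimal sketch in the spirit of MonitorCT follows; the KPI names mirror those listed in the abstract, but the tolerance values and the failing measurement are assumptions, not the published configuration:

```python
# Assumed tolerance windows per KPI (illustrative values, in HU where applicable)
TOLERANCES = {
    "ct_number_water": (-4.0, 4.0),
    "ct_number_air": (-1005.0, -995.0),
    "image_noise": (0.0, 6.0),
    "uniformity": (-2.0, 2.0),
}

def evaluate_kpis(measurements):
    """Return the KPIs whose measured value falls outside its tolerance window."""
    failures = {}
    for kpi, value in measurements.items():
        low, high = TOLERANCES[kpi]
        if not (low <= value <= high):
            failures[kpi] = value
    return failures

# Hypothetical daily QA phantom scan with a drifted water calibration
daily_scan = {"ct_number_water": 5.2, "ct_number_air": -999.0,
              "image_noise": 4.8, "uniformity": 0.6}
print(evaluate_kpis(daily_scan))  # {'ct_number_water': 5.2}
```

Any non-empty failure set would trigger the kind of rapid follow-up action the study reports (initiated in under 10 minutes).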
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
The PIPER project set out to develop methodologies and software for measurement, analysis, attribution, and presentation of performance data for extreme-scale systems. Goals of the project were to support analysis of massive multi-scale parallelism, heterogeneous architectures, and multi-faceted performance concerns, and to support both post-mortem performance analysis to identify program features that contribute to problematic performance and on-line performance analysis to drive adaptation. This final report summarizes the research and development activity at Rice University as part of the PIPER project. Producing a complete suite of performance tools for exascale platforms during the course of this project was impossible since both hardware and software for exascale systems are still a moving target. For that reason, the project focused broadly on the development of new techniques for measurement and analysis of performance on modern parallel architectures; enhancements to HPCToolkit's software infrastructure to support our research goals or use on sophisticated applications; engaging developers of multithreaded runtimes to explore how support for tools should be integrated into their designs; engaging operating system developers with feature requests for enhanced monitoring support; engaging vendors with requests that they add hardware measurement capabilities and software interfaces needed by tools as they design new components of HPC platforms, including processors, accelerators and networks; and finally, collaborations with partners interested in using HPCToolkit to analyze and tune scalable parallel applications.
Dual-channel (green and red) fluorescence microendoscope with subcellular resolution
NASA Astrophysics Data System (ADS)
de Paula D'Almeida, Camila; Fortunato, Thereza Cury; Teixeira Rosa, Ramon Gabriel; Romano, Renan Arnon; Moriyama, Lilian Tan; Pratavieira, Sebastião.
2018-02-01
Usually, imaging tissue at the cellular level requires a biopsy. Diagnostic devices such as microendoscopes have therefore been developed with the aim of avoiding invasive procedures. The goal of this study is the development of a dual-channel microendoscope using two fluorescent labels, proflavine and protoporphyrin IX (PpIX), both approved by the Food and Drug Administration. This system, with the potential to perform microscopic diagnosis and to monitor a photodynamic therapy (PDT) session, uses a halogen lamp and an image fiber bundle to acquire subcellular images. Proflavine fluorescence marks the cell nuclei, which serve as the reference for PpIX localization in the tissue image. Preliminary results indicate the efficacy of this optical technique for detecting abnormal tissue and improving PDT dosimetry. To our knowledge, this is the first time that PpIX fluorescence has been observed microscopically in vivo, in real time, combined with another fluorescent marker (proflavine), allowing the spatial localization of PpIX in the mucosal tissue to be observed simultaneously. We believe this system is a very promising tool for monitoring PDT in mucosa as it happens. Further experiments must be performed to validate the system for PDT monitoring.
Campbell, Harry; el Arifeen, Shams; Hazir, Tabish; O'Kelly, James; Bryce, Jennifer; Rudan, Igor; Qazi, Shamim Ahmad
2013-01-01
Pneumonia remains a major cause of child death globally, and improving antibiotic treatment rates is a key control strategy. Progress in improving the global coverage of antibiotic treatment is monitored through large household surveys such as the Demographic and Health Surveys (DHS) and the Multiple Indicator Cluster Surveys (MICS), which estimate antibiotic treatment rates of pneumonia based on two-week recall of pneumonia by caregivers. However, these survey tools identify children with reported symptoms of pneumonia, and because the prevalence of pneumonia over a two-week period in community settings is low, the majority of these children do not have true pneumonia and so do not provide an accurate denominator of pneumonia cases for monitoring antibiotic treatment rates. In this review, we show that the performance of survey tools could be improved by increasing the survey recall period or by improving either overall discriminative power or specificity. However, even at a test specificity of 95% (and a test sensitivity of 80%), the proportion of children with reported symptoms of pneumonia who truly have pneumonia is only 22% (the positive predictive value of the survey tool). Thus, although DHS and MICS survey data on rates of care seeking for children with reported symptoms of pneumonia and other childhood illnesses remain valid and important, DHS and MICS data are not able to give valid estimates of antibiotic treatment rates in children with pneumonia. PMID:23667338
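The 22% figure above is a direct application of Bayes' rule. A minimal sketch of the arithmetic; the two-week prevalence value is an assumption chosen to be consistent with the numbers reported in the abstract:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: the fraction of children with reported symptoms
    (survey-positive) who truly have pneumonia."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# 80% sensitivity, 95% specificity, and a low two-week community
# prevalence (~1.7%, an illustrative value) give a PPV of about 22%.
ppv = positive_predictive_value(0.80, 0.95, 0.017)
print(round(ppv, 2))  # -> 0.22
```

This makes the review's point concrete: even with a highly specific survey tool, a low community prevalence means most survey-identified "pneumonia" cases are false positives, so the denominator for antibiotic treatment rates is unreliable.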
The use of mist nets as a tool for bird population monitoring
E.H. Dunn; C. John Ralph
2004-01-01
Mist nets are an important tool for population monitoring, here defined as assessment of species composition, relative abundance, population size, and demography. We review the strengths and limitations of mist netting for monitoring purposes, based on papers in this volume and other literature. Advantages of using mist nets over aural or visual count methods include...
Tools in a clinical information system supporting clinical trials at a Swiss University Hospital.
Weisskopf, Michael; Bucklar, Guido; Blaser, Jürg
2014-12-01
Issues concerning inadequate source data of clinical trials rank second in the most common findings by regulatory authorities. The increasing use of electronic clinical information systems by healthcare providers offers an opportunity to facilitate and improve the conduct of clinical trials and the source documentation. We report on a number of tools implemented into the clinical information system of a university hospital to support clinical research. In 2011/2012, a set of tools was developed in the clinical information system of the University Hospital Zurich to support clinical research, including (1) a trial registry for documenting metadata on the clinical trials conducted at the hospital, (2) a patient-trial-assignment-tool to tag patients in the electronic medical charts as participants of specific trials, (3) medical record templates for the documentation of study visits and trial-related procedures, (4) online queries on trials and trial participants, (5) access to the electronic medical records for clinical monitors, (6) an alerting tool to notify of hospital admissions of trial participants, (7) queries to identify potentially eligible patients in the planning phase as trial feasibility checks and during the trial as recruitment support, and (8) order sets to facilitate the complete and accurate performance of study visit procedures. The number of approximately 100 new registrations per year in the voluntary trial registry in the clinical information system now matches the numbers of the existing mandatory trial registry of the hospital. Likewise, the yearly numbers of patients tagged as trial participants as well as the use of the standardized trial record templates increased to 2408 documented trial enrolments and 190 reports generated/month in the year 2013. Accounts for 32 clinical monitors have been established in the first 2 years monitoring a total of 49 trials in 16 clinical departments. 
In the 15 months after adding the optional feature of hospital admission alerts for trial participants, 107 running trials had activated this option, including 48 of the 97 studies (49.5%) registered in the year 2013, generating approximately 85 alerts per month. The popularity of the presented tools in the clinical information system illustrates their potential to facilitate the conduct of clinical trials. The tools also allow for enhanced transparency on trials conducted at the hospital. Future studies on monitoring and inspection findings will have to evaluate their impact on quality and safety. © The Author(s) 2014.
NASA Technical Reports Server (NTRS)
Hughes, Peter M.; Luczak, Edward C.
1991-01-01
Flight Operations Analysts (FOAs) in the Payload Operations Control Center (POCC) are responsible for monitoring a satellite's health and safety. As satellites become more complex and data rates increase, FOAs are quickly approaching a level of information saturation. The FOAs in the spacecraft control center for the COBE (Cosmic Background Explorer) satellite are currently using a fault isolation expert system named the Communications Link Expert Assistance Resource (CLEAR) to assist in isolating and correcting communications link faults. Due to the success of CLEAR and several other systems in the control center domain, many other monitoring and fault isolation expert systems will likely be developed to support control center operations during the early 1990s. To facilitate the development of these systems, a project was initiated to develop a domain-specific tool, named the Generic Spacecraft Analyst Assistant (GenSAA). GenSAA will enable spacecraft analysts to easily build simple real-time expert systems that perform spacecraft monitoring and fault isolation functions. Lessons learned during the development of several expert systems at Goddard are described; these lessons established the foundation of GenSAA's objectives and offer insights into how problems may be avoided in future projects. This is followed by a description of the capabilities, architecture, and usage of GenSAA, along with a discussion of its application to future NASA missions.
Hall, Travis; Nguyen, Tam Q.; Mayeda, Jill C.; Lie, Paul E.; Lopez, Jerry; Banister, Ron E.
2017-01-01
It has been the dream of many scientists and engineers to realize a non-contact remote sensing system that can perform continuous, accurate and long-term monitoring of human vital signs, as we have seen in many Sci-Fi movies. Having an intelligible sensor system that can measure and record key vital signs (such as heart rate and respiration rate) remotely and continuously without touching the patient, for example, can be an invaluable tool for physicians who need to make rapid life-and-death decisions. Such a sensor system can also effectively help physicians and patients make better-informed decisions when patients' long-term vital signs data are available. Therefore, there has been much research activity on developing a non-contact sensor system that can monitor a patient's vital signs and quickly transmit the information to healthcare professionals. Doppler-based radio-frequency (RF) non-contact vital signs (NCVS) monitoring systems are particularly attractive for long-term vital signs monitoring because no wires, electrodes, wearable devices, or contact-based sensors are involved, so the subjects may not even be aware of the ubiquitous monitoring. In this paper, we provide a brief review of some of the latest developments in NCVS sensors and compare them against a few novel and intelligent phased-array Doppler-based RF NCVS biosensors we have built in our labs. Some of our NCVS sensor tests were performed within a clutter-free anechoic chamber to mitigate environmental clutter, while most tests were conducted within a typical Herman-Miller office cubicle setting to mimic a more practical monitoring environment. Additionally, we show measurement data to demonstrate the feasibility of long-term NCVS monitoring. 
The measured data strongly suggests that our latest phased array NCVS system should be able to perform long-term vital signs monitoring intelligently and robustly, especially for situations where the subject is sleeping without hectic movements nearby. PMID:29140281
Hall, Travis; Lie, Donald Y C; Nguyen, Tam Q; Mayeda, Jill C; Lie, Paul E; Lopez, Jerry; Banister, Ron E
2017-11-15
It has been the dream of many scientists and engineers to realize a non-contact remote sensing system that can perform continuous, accurate and long-term monitoring of human vital signs, as we have seen in many Sci-Fi movies. Having an intelligible sensor system that can measure and record key vital signs (such as heart rate and respiration rate) remotely and continuously without touching the patient, for example, can be an invaluable tool for physicians who need to make rapid life-and-death decisions. Such a sensor system can also effectively help physicians and patients make better-informed decisions when patients' long-term vital signs data are available. Therefore, there has been much research activity on developing a non-contact sensor system that can monitor a patient's vital signs and quickly transmit the information to healthcare professionals. Doppler-based radio-frequency (RF) non-contact vital signs (NCVS) monitoring systems are particularly attractive for long-term vital signs monitoring because no wires, electrodes, wearable devices, or contact-based sensors are involved, so the subjects may not even be aware of the ubiquitous monitoring. In this paper, we provide a brief review of some of the latest developments in NCVS sensors and compare them against a few novel and intelligent phased-array Doppler-based RF NCVS biosensors we have built in our labs. Some of our NCVS sensor tests were performed within a clutter-free anechoic chamber to mitigate environmental clutter, while most tests were conducted within a typical Herman-Miller office cubicle setting to mimic a more practical monitoring environment. Additionally, we show measurement data to demonstrate the feasibility of long-term NCVS monitoring. 
The measured data strongly suggests that our latest phased array NCVS system should be able to perform long-term vital signs monitoring intelligently and robustly, especially for situations where the subject is sleeping without hectic movements nearby.
Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G
2015-10-01
To establish whether lower-resolution, lower-cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (analysis of variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of readers' experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation, with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower-resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower-resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance across all viewing devices. • Smartphones are not suitable for displaying mammograms.
McMahon, Terry W; Newman, David G
2016-04-01
Flying a helicopter is a complex psychomotor skill. Fatigue is a serious threat to operational safety, particularly for sustained helicopter operations involving high levels of cognitive information processing and sustained time on task. As part of ongoing research into this issue, the object of this study was to develop a field-deployable helicopter-specific psychomotor vigilance test (PVT) for the purpose of daily performance monitoring of pilots. The PVT consists of a laptop computer, a hand-operated joystick, and a set of rudder pedals. Screen-based compensatory tracking task software includes a tracking ball (operated by the joystick) which moves randomly in all directions, and a second tracking ball which moves horizontally (operated by the rudder pedals). The 5-min test requires the pilot to keep both tracking balls centered. This helicopter-specific PVT's portability and integrated data acquisition and storage system enables daily field monitoring of the performance of individual helicopter pilots. The inclusion of a simultaneous foot-operated tracking task ensures divided attention for helicopter pilots as the movement of both tracking balls requires simultaneous inputs. This PVT is quick, economical, easy to use, and specific to the operational flying task. It can be used for performance monitoring purposes, and as a general research tool for investigating the psychomotor demands of helicopter operations. While reliability and validity testing is warranted, data acquired from this test could help further our understanding of the effect of various factors (such as fatigue) on helicopter pilot performance, with the potential of contributing to helicopter operational safety.
Changes in water quality along the course of a river - Classic monitoring versus patrol monitoring
NASA Astrophysics Data System (ADS)
Absalon, Damian; Kryszczuk, Paweł; Rutkiewicz, Paweł
2017-11-01
Monitoring of water quality is a tool necessary to assess the condition of waterbodies in order to properly formulate water management plans. The paper presents the results of patrol monitoring of a 40-kilometre stretch of the Oder between Racibórz and Koźle. It has been established that patrol monitoring is a good tool for verifying the distribution of points of classic stationary monitoring, particularly in areas subject to varied human impact, where tributaries of the main river are very diversified as regards hydrochemistry. For this reason the results of operational monitoring carried out once every few years may not be reliable and the presented condition of the monitored waterbodies may be far from reality.
Ultramicroelectrode Array Based Sensors: A Promising Analytical Tool for Environmental Monitoring
Orozco, Jahir; Fernández-Sánchez, César; Jiménez-Jorquera, Cecilia
2010-01-01
The particular analytical performance of ultramicroelectrode arrays (UMEAs) has attracted considerable interest from the research community and has led to the development of a variety of electroanalytical applications. UMEA-based approaches have proven to be powerful, simple, rapid and cost-effective analytical tools for environmental analysis compared to available conventional electrodes and standardised analytical techniques. An overview of the fabrication processes of UMEAs, their characterization and applications carried out by the Spanish scientific community is presented. A brief explanation of the theoretical aspects that underlie their electrochemical behavior is also given. Finally, the applications of this transducer platform in the environmental field are discussed. PMID:22315551
Ross, Kathryn M; Wing, Rena R
2016-08-01
Despite the proliferation of newer self-monitoring technology (e.g., activity monitors and smartphone apps), their impact on weight loss outside of structured in-person behavioral intervention is unknown. A randomized, controlled pilot study was conducted to examine efficacy of self-monitoring technology, with and without phone-based intervention, on 6-month weight loss in adults with overweight and obesity. Eighty participants were randomized to receive standard self-monitoring tools (ST, n = 26), technology-based self-monitoring tools (TECH, n = 27), or technology-based tools combined with phone-based intervention (TECH + PHONE, n = 27). All participants attended one introductory weight loss session and completed assessments at baseline, 3 months, and 6 months. Weight loss from baseline to 6 months differed significantly between groups P = 0.042; there was a trend for TECH + PHONE (-6.4 ± 1.2 kg) to lose more weight than ST (-1.3 ± 1.2 kg); weight loss in TECH (-4.1 ± 1.4 kg) was between ST and TECH + PHONE. Fewer ST (15%) achieved ≥5% weight losses compared with TECH and TECH + PHONE (44%), P = 0.039. Adherence to self-monitoring caloric intake was higher in TECH + PHONE than TECH or ST, Ps < 0.05. These results suggest use of newer self-monitoring technology plus brief phone-based intervention improves adherence and weight loss compared with traditional self-monitoring tools. Further research should determine cost-effectiveness of adding phone-based intervention when providing self-monitoring technology. © 2016 The Obesity Society.
Ross, Kathryn M.; Wing, Rena R.
2016-01-01
Objective Despite the proliferation of newer self-monitoring technology (e.g., activity monitors and smartphone apps), their impact on weight loss outside of structured in-person behavioral intervention is unknown. Methods A randomized, controlled pilot study was conducted to examine efficacy of self-monitoring technology, with and without phone-based intervention, on 6-month weight loss in adults with overweight and obesity. Eighty participants were randomized to receive standard self-monitoring tools (ST, n=26), technology-based self-monitoring tools (TECH, n=27), or technology-based tools combined with phone-based intervention (TECH+PHONE, n=27). All participants attended one introductory weight loss session and completed assessments at baseline, 3 months, and 6 months. Results Weight loss from baseline to 6 months differed significantly between groups p=.042; there was a trend for TECH+PHONE (−6.4±1.2kg) to lose more weight than ST (−1.3±1.2kg); weight loss in TECH (−4.1±1.4kg) was between ST and TECH+PHONE. Fewer ST (15%) achieved ≥5% weight losses compared to TECH and TECH+PHONE (44%), p=.039. Adherence to self-monitoring caloric intake was higher in TECH+PHONE than TECH or ST, ps<.05. Conclusion These results suggest use of newer self-monitoring technology plus brief phone-based intervention improves adherence and weight loss compared to traditional self-monitoring tools. Further research should determine cost-effectiveness of adding phone-based intervention when providing self-monitoring technology. PMID:27367614
Lacunarity study of speckle patterns produced by rough surfaces
NASA Astrophysics Data System (ADS)
Dias, M. R. B.; Dornelas, D.; Balthazar, W. F.; Huguenin, J. A. O.; da Silva, L.
2017-11-01
In this work we report on a study of the lacunarity of digital speckle patterns generated by rough surfaces. The lacunarity of speckle patterns was studied for both static and moving rough surfaces. The results show that lacunarity is sensitive to surface roughness, which suggests that it can be used to perform indirect measurements of surface roughness as well as to monitor defects, or variations in roughness, of moving metallic surfaces. Our results show the robustness of this statistical tool applied to speckle patterns for the study of surface roughness.
New Applications of Portable Raman Spectroscopy in Agri-Bio-Photonics
NASA Astrophysics Data System (ADS)
Voronine, Dmitri; Scully, Rob; Sanders, Virgil
2014-03-01
Modern optical techniques based on Raman spectroscopy are being used to monitor and analyze the health of cattle, crops and their natural environment. These optical tools are now available to perform fast, noninvasive analysis of live animals and plants in situ. We will report new applications of portable handheld Raman spectroscopy to the identification and taxonomy of plants. In addition, detection of organic food residues will be demonstrated. Advantages and limitations of current portable instruments will be discussed, with suggestions for improved performance by applying enhanced Raman spectroscopic schemes.
Llimona, Pere; Pérez, Glòria; Rodríguez-Sanz, Maica; Novoa, Ana M; Espelt, Albert; García de Olalla, Patricia; Borrell, Carme
To understand the health of a population, it is necessary to perform a systematic and continuous analysis of its health status and of its social and economic health determinants. The objective of this paper is to describe the development and implementation of the Infobarris tool, which allows a wide battery of health indicators and social determinants of health to be visualized by neighbourhood in the city of Barcelona (Spain). For the development of the Infobarris tool, we used an agile methodology that allows a project to be developed in iterative and incremental stages, namely: selection of indicators, design of the prototype, development of the tool, data loading, and tool review and improvements. Infobarris displays 64 indicators of health and its determinants through graphics, maps and tables in a friendly, interactive and attractive way, which facilitates health surveillance in the neighbourhoods of Barcelona. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
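A minimal 1-D sketch of the maximum-likelihood method implemented in DeConv_Tool, using the Richardson-Lucy iteration, with residual statistics monitored during computation as the abstract describes. This is a pure-Python stand-in for the IDL implementation, and the signal and PSF values are illustrative:

```python
def convolve(signal, psf):
    """Direct truncated convolution of a 1-D signal with a centered PSF."""
    half = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += w * signal[k]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Maximum-likelihood deconvolution; returns the estimate and the
    mean residual (observed minus reblurred estimate) of the last step."""
    estimate = [1.0] * len(observed)
    psf_mirror = psf[::-1]
    mean_res = 0.0
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 0 else 0.0 for o, b in zip(observed, blurred)]
        estimate = [e * c for e, c in zip(estimate, convolve(ratio, psf_mirror))]
        # Residual statistic, monitored each iteration as in DeConv_Tool
        residuals = [o - b for o, b in zip(observed, blurred)]
        mean_res = sum(residuals) / len(residuals)
    return estimate, mean_res

true_signal = [0.0, 0.0, 5.0, 0.0, 0.0, 3.0, 0.0]
psf = [0.25, 0.5, 0.25]
observed = convolve(true_signal, psf)
restored, mean_residual = richardson_lucy(observed, psf)
print("mean residual:", round(mean_residual, 4))
```

Tracking the residual mean (and, in the real tool, standard deviation, entropy, log-likelihood and residual autocorrelation) across iterations is what enables the convergence monitoring and between-method comparison the abstract describes.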
Bozorgmehr, Kayvan; Goosen, Simone; Mohsenpour, Amir; Kuehne, Anna; Razum, Oliver; Kunst, Anton E
2017-08-08
Background: Accurate data on the health status, health behaviour and access to health care of asylum seekers is essential, but such data is lacking in many European countries. We hence aimed to: (a) develop and pilot-test an instrument that can be used to compare and benchmark the country health information systems (HIS) with respect to the ability to assess the health status and health care situation of asylum seekers and (b) present the results of that pilot for The Netherlands (NL) and Germany (DE). Materials and Methods: Reviewing and adapting existing tools, we developed a Health Information Assessment Tool on Asylum Seekers (HIATUS) with 50 items to assess HIS performance across three dimensions: (1) availability and detail of data across potential data sources; (2) HIS resources and monitoring capacity; (3) general coverage and timeliness of publications on selected indicators. We piloted HIATUS by applying the tool to the HIS in DE and NL. Two raters per country independently assessed the performance of country HIS and the inter-rater reliability was analysed by Pearson's rho and the intra-class correlation (ICC). We then applied a consensus-based group rating to obtain the final ratings which were transformed into a weighted summary score (range: 0-97). We assessed HIS performance by calculating total and domain-specific HIATUS scores by country as well as absolute and relative gaps in scores within and between countries. Results: In the independent rating, Pearson's rho was 0.14 (NL) and 0.30 (DE), the ICC yielded an estimated reliability of 0.29 (NL) and 0.83 (DE) respectively. In the final consensus-based rating, the total HIATUS score was 47 in NL and 15 in DE, translating into a relative gap in HIS capacity of 52% (NL) and 85% (DE) respectively. 
Shortfalls in HIS capacity in both countries relate to the areas of HIS coordination, planning and policies, and to limited coverage of specific indicators such as self-reported health, mental health, socio-economic status and health behaviour. The relative gap in the HIATUS component "data sources and availability" was much higher in Germany (92%) than in NL (28%). Conclusions: The standardised tool (HIATUS) proved useful for assessment of country HIS performance in two countries by consensus-based rating. HIATUS revealed substantial limitations in HIS capacity to assess the health situation of asylum seekers in both countries. The tool allowed for between-country comparisons, revealing that capacities were lower in DE relative to NL. Monitoring and benchmarking gaps in HIS capacity in further European countries can help to strengthen HIS in the future.
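The reported relative gaps follow directly from the total scores and the 0-97 score range, assuming gap = 1 - score/maximum; a quick check:

```python
MAX_SCORE = 97  # upper end of the weighted HIATUS summary score range (0-97)

def relative_gap_pct(score, max_score=MAX_SCORE):
    """Relative gap in HIS capacity: the share of the maximum achievable
    HIATUS score that a country's HIS did not attain, in percent."""
    return round(100 * (1 - score / max_score))

print(relative_gap_pct(47))  # NL: total score 47 -> 52 (% gap)
print(relative_gap_pct(15))  # DE: total score 15 -> 85 (% gap)
```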
Tools for distributed application management
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Cooper, Robert; Wood, Mark; Birman, Kenneth P.
1990-01-01
Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system (a collection of tools for constructing distributed application management software) is described. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when preexisting, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.
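The abstract describes sensors and actuators as the interface between a reactive control program and the managed application. The following is a hypothetical sketch of that interface (all names and the example policy are invented, not Meta's API):

```python
# Hypothetical sketch of a Meta-style control interface: sensors expose
# application state, actuators apply control actions, and a reactive
# policy binds them on each control step.
class Sensor:
    def __init__(self, name, read_fn):
        self.name, self.read = name, read_fn

class Actuator:
    def __init__(self, name, act_fn):
        self.name, self.act = name, act_fn

def control_step(sensors, actuators, policy):
    """One reactive step: sample every sensor, let the policy decide actions."""
    readings = {s.name: s.read() for s in sensors}
    for actuator_name, argument in policy(readings):
        actuators[actuator_name].act(argument)
    return readings

# Example policy: restart a worker whose request queue grows too long.
restarted = []
sensors = [Sensor("queue_len", lambda: 12)]
actuators = {"restart_worker": Actuator("restart_worker", restarted.append)}
policy = lambda r: [("restart_worker", "worker-1")] if r["queue_len"] > 10 else []
readings = control_step(sensors, actuators, policy)
print(restarted)  # ['worker-1']
```

The separation mirrors the abstract's point: the mechanism (`control_step`) is fixed, while the policy is supplied by the programmer.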
Acoustic emission and nondestructive evaluation of biomaterials and tissues.
Kohn, D H
1995-01-01
Acoustic emission (AE) is an acoustic wave generated by the release of energy from localized sources in a material subjected to an externally applied stimulus. This technique may be used nondestructively to analyze tissues, materials, and biomaterial/tissue interfaces. Applications of AE include use as an early warning tool for detecting tissue and material defects and incipient failure, monitoring damage progression, predicting failure, characterizing failure mechanisms, and serving as a tool to aid in understanding material properties and structure-function relations. All these applications may be performed in real time. This review discusses general principles of AE monitoring and the use of the technique in 3 areas of importance to biomedical engineering: (1) analysis of biomaterials, (2) analysis of tissues, and (3) analysis of tissue/biomaterial interfaces. Focus in these areas is on detection sensitivity, methods of signal analysis in both the time and frequency domains, the relationship between acoustic signals and microstructural phenomena, and the uses of the technique in establishing a relationship between signals and failure mechanisms.
NASA Astrophysics Data System (ADS)
Hilliard, Antony
Energy Monitoring and Targeting (M&T) is a well-established business process that develops information about utility energy consumption in a business or institution. While M&T has persisted as a worthwhile energy conservation support activity, it has not been widely adopted. This dissertation explains M&T challenges in terms of diagnosing and controlling energy consumption, informed by a naturalistic field study of M&T work. A Cognitive Work Analysis of M&T identifies structures that diagnosis can search, information flows unsupported in canonical support tools, and opportunities to extend the most popular tool for M&T: Cumulative Sum of Residuals (CUSUM) charts. A design application outlines how CUSUM charts were augmented with a more contemporary statistical change detection strategy, Recursive Parameter Estimates, modified to better suit the M&T task using Representation Aiding principles. The design was experimentally evaluated in a controlled M&T synthetic task, and was shown to significantly improve diagnosis performance.
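A CUSUM-of-residuals chart, as used in M&T, fits a baseline model of consumption against a driver variable and accumulates deviations from it; a sustained excess shows up as a steadily rising cumulative sum. A minimal sketch on synthetic data (the linear baseline and numbers are assumptions for illustration):

```python
import numpy as np

def cusum_of_residuals(consumption, driver, baseline):
    """CUSUM chart for M&T: fit a linear consumption-vs-driver baseline
    (driver = e.g. degree-days or production) on a reference period, then
    cumulatively sum the residuals of the whole series against it."""
    slope, intercept = np.polyfit(driver[baseline], consumption[baseline], 1)
    residuals = consumption - (slope * driver + intercept)
    return np.cumsum(residuals)

# Synthetic example: consumption tracks the driver for 12 periods, then a
# fault adds a sustained 5 units per period of excess consumption.
driver = np.linspace(10, 30, 24)
consumption = 2.0 * driver + 50.0
consumption[12:] += 5.0
cusum = cusum_of_residuals(consumption, driver, slice(0, 12))
print(cusum[-1])  # ~60 (12 excess periods x 5 units)
```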
Kandelbauer, A; Kessler, W; Kessler, R W
2008-03-01
The laccase-catalysed transformation of indigo carmine (IC) with and without a redox active mediator was studied using online UV-visible spectroscopy. Deconvolution of the mixture spectra obtained during the reaction was performed on a model-free basis using multivariate curve resolution (MCR). Thereby, the time courses of educts, products, and reaction intermediates involved in the transformation were reconstructed without prior mechanistic assumptions. Furthermore, the spectral signature of a reactive intermediate which could not have been detected by a classical hard-modelling approach was extracted from the chemometric analysis. The findings suggest that the combined use of UV-visible spectroscopy and MCR may lead to unexpectedly deep mechanistic evidence otherwise buried in the experimental data. Thus, although rather an unspecific method, UV-visible spectroscopy can prove useful in the monitoring of chemical reactions when combined with MCR. This offers a wide range of chemists a cheap and readily available, highly sensitive tool for chemical reaction online monitoring.
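Multivariate curve resolution factors the measured spectra matrix into nonnegative concentration profiles and pure-component spectra without a kinetic model. Below is a minimal alternating-least-squares sketch with nonnegativity clipping on synthetic two-component data; it illustrates the MCR idea only and is not the authors' implementation (initialization and constraints are assumptions):

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200):
    """Model-free curve resolution: factor D (time x wavelength) into
    nonnegative concentration profiles C and spectra S with D ~ C @ S.T."""
    # Initialize spectra from magnitudes of the leading right singular vectors.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    S = np.abs(Vt[:n_components].T)
    for _ in range(n_iter):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)
    return C, S

# Synthetic two-component kinetics: A -> B with well-separated Gaussian spectra.
t = np.linspace(0, 5, 60)[:, None]
wl = np.linspace(0, 1, 80)[None, :]
C_true = np.hstack([np.exp(-t), 1 - np.exp(-t)])          # concentrations
S_true = np.vstack([np.exp(-((wl - 0.3) ** 2) / 0.01),
                    np.exp(-((wl - 0.7) ** 2) / 0.01)])   # spectra (rows)
D = C_true @ S_true
C, S = mcr_als(D, 2)
err = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

The recovered columns of `C` play the role of the reconstructed time courses of reactants, intermediates, and products described in the abstract (up to scaling and ordering ambiguity, which is inherent to MCR).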
NASA Astrophysics Data System (ADS)
Giama, E.; Papadopoulos, A. M.
2018-01-01
The reduction of carbon emissions has become a top priority in the decision-making process for governments and companies, the strict European legislation framework being a major driving force behind this effort. On the other hand, many companies face difficulties in estimating their footprint and in linking the results derived from environmental evaluation processes with an integrated energy management strategy, which will eventually lead to energy-efficient and cost-effective solutions. The paper highlights the need for companies to establish integrated environmental management practices, with tools such as carbon footprint analysis to monitor the energy performance of production processes. Concepts and methods are analysed, and selected indicators are presented by means of benchmarking, monitoring and reporting the results in order to be used effectively by the companies. The study is based on data from more than 90 Greek small and medium enterprises, followed by a comprehensive discussion of cost-effective and realistic energy-saving measures.
Aidman, Eugene; Chadunow, Carolyn; Johnson, Kayla; Reece, John
2015-08-01
Driver drowsiness has been implicated as a major causal factor in road accidents. Tools that allow remote monitoring and management of driver fatigue are used in the mining and road transport industries. Increasing drivers' own awareness of their drowsiness levels using such tools may also reduce risk of accidents. The study examined the effects of real-time blink-velocity-derived drowsiness feedback on driver performance and levels of alertness in a military setting. A sample of 15 Army Reserve personnel (1 female) aged 21-59 (M=41.3, SD=11.1) volunteered to be monitored by an infra-red oculography-based Optalert Alertness Monitoring System (OAMS) while they performed their regular driving tasks, including on-duty tasks and commuting to and from duty, for a continuous period of 4-8 weeks. For approximately half that period, blink-velocity-derived Johns Drowsiness Scale (JDS) scores were fed back to the driver in a counterbalanced repeated-measures design, resulting in a total of 419 driving periods under the "feedback" and 385 periods under the "no-feedback" condition. Overall, the provision of real-time feedback resulted in reduced drowsiness (lower JDS scores) and improved alertness and driving performance ratings. The effect was small and varied across the 24-h circadian cycle but it remained robust after controlling for time of day and driving task duration. Both the number of JDS peaks counted for each trip and their duration declined in the presence of drowsiness feedback, indicating a dynamic pattern that is consistent with a genuine, entropy-reducing feedback mechanism (as distinct from random re-alerting) behind the observed effect. Its mechanisms and practical utility have yet to be fully explored. Direct examination of the alternative, random re-alerting explanation of this feedback effect is an important step for future research. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Simon, Donald L.
2007-01-01
This paper presents a preliminary demonstration of an automated health assessment tool, capable of real-time on-board operation using existing engine control hardware. The tool allows operators to discern how rapidly individual turboshaft engines are degrading. As the compressor erodes, performance is lost, and with it the ability to generate power. Thus, such a tool would provide an instant assessment of the engine's fitness to perform a mission, and would help to pinpoint any abnormal wear or performance anomalies before they became serious, thereby decreasing uncertainty and enabling improved maintenance scheduling. The research described in the paper utilized test stand data from a T700-GE-401 turboshaft engine that underwent sand-ingestion testing to scale a model-based compressor efficiency degradation estimation algorithm. This algorithm was then applied to real-time Health Usage and Monitoring System (HUMS) data from a T700-GE-701C to track compressor efficiency on-line. The approach uses an optimal estimator called a Kalman filter. The filter is designed to estimate the compressor efficiency using only data from the engine's sensors as input.
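The paper's estimator is model-based and driven by the engine's sensor suite; as a minimal, hedged illustration of the Kalman-filter idea only, the sketch below tracks a slowly drifting scalar efficiency with a random-walk state model on synthetic data (all parameters are assumptions, not the paper's design):

```python
import numpy as np

def track_efficiency(measurements, q=1e-5, r=1e-2, x0=1.0, p0=1.0):
    """Scalar Kalman filter tracking a slowly degrading efficiency parameter
    from noisy per-flight estimates, with a random-walk state model."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                  # predict: process noise lets the state drift
        k = p / (p + r)            # Kalman gain from predicted variance
        x = x + k * (z - x)        # correct with the measurement residual
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Synthetic erosion: efficiency degrades 0.0005 per flight; sensing is noisy.
rng = np.random.default_rng(2)
true_eff = 1.0 - 0.0005 * np.arange(400)
measured = true_eff + rng.normal(0.0, 0.05, 400)
estimated = track_efficiency(measured)
```

The filtered track is much smoother than the raw measurements, which is what lets an operator see a gradual degradation trend instead of flight-to-flight scatter.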
Jefferds, Maria Elena D; Flores-Ayala, Rafael
2015-12-01
Lack of monitoring capacity is a key barrier for nutrition interventions and limits programme management, decision making and programme effectiveness in many low-income and middle-income countries. A 2011 global assessment reported lack of monitoring capacity was the top barrier for home fortification interventions, such as micronutrient powders or lipid-based nutrient supplements. A Manual for Developing and Implementing Monitoring Systems for Home Fortification Interventions was recently disseminated. It is comprehensive and describes monitoring concepts and frameworks and includes monitoring tools and worksheets. The monitoring manual describes the steps of developing and implementing a monitoring system for home fortification interventions, including identifying and engaging stakeholders; developing a programme description including logic model and logical framework; refining the purpose of the monitoring system, identifying users and their monitoring needs; describing the design of the monitoring system; developing indicators; describing the core components of a comprehensive monitoring plan; and considering factors related to stage of programme development, sustainability and scale up. A fictional home fortification example is used throughout the monitoring manual to illustrate these steps. The monitoring manual is a useful tool to support the development and implementation of home fortification intervention monitoring systems. In the context of systematic capacity gaps to design, implement and monitor nutrition interventions in many low-income and middle-income countries, the dissemination of new tools, such as monitoring manuals may have limited impact without additional attention to strengthening other individual, organisational and systems levels capacities. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Efficient monitoring of CRAB jobs at CMS
NASA Astrophysics Data System (ADS)
Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
James T. Peterson; Sherry P. Wollrab
1999-01-01
Natural resource managers in the Inland Northwest need tools for assessing the success or failure of conservation policies and the impacts of management actions on fish and fish habitats. Effectiveness monitoring is one such potential tool, but there are currently no established monitoring protocols. Since 1991, U.S. Forest Service biologists have used the standardized...
Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen
2016-04-01
To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used in the authors' clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.
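The core cross-check ArtQA performs (matching beams between TPS and R&V, then comparing parameters within tolerance) can be illustrated with a small sketch. All field names, tolerances, and values below are hypothetical, not ArtQA's data model:

```python
# Hypothetical sketch of ArtQA-style parameter cross-checking: compare
# per-beam parameters retrieved from the TPS against the R&V system,
# matching beams on geometry. Field names and tolerances are illustrative.
TOLERANCE = {"mu": 0.5, "gantry_deg": 0.1}

def match_and_check(tps_beams, rv_beams):
    """Match each TPS beam to the geometrically closest R&V beam (by gantry
    angle here) and flag any parameter outside its tolerance."""
    issues = []
    for beam in tps_beams:
        partner = min(rv_beams,
                      key=lambda b: abs(b["gantry_deg"] - beam["gantry_deg"]))
        for field, tol in TOLERANCE.items():
            if abs(beam[field] - partner[field]) > tol:
                issues.append((beam["name"], field))
    return issues

tps = [{"name": "AP", "gantry_deg": 0.0, "mu": 120.0},
       {"name": "LAT", "gantry_deg": 90.0, "mu": 95.0}]
rv = [{"name": "AP", "gantry_deg": 0.0, "mu": 121.0},   # 1.0 MU deviation
      {"name": "LAT", "gantry_deg": 90.0, "mu": 95.2}]
issues = match_and_check(tps, rv)
print(issues)  # [('AP', 'mu')]
```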
Source Water Quality Monitoring
Presentation will provide background information on continuous source water monitoring using online toxicity monitors and cover various tools available. Conceptual and practical aspects of source water quality monitoring will be discussed.
Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2015-09-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust `operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures, through comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECV (essential climate variables) is explored and described in detail with dialogue on appropriate rule-based valuations. These optimization tool(s) suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper will focus on recent research exploring joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study. This reflects work from SPIE RS Conferences 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, there is a complete reversal: should agencies consider disaggregation as the answer? We'll discuss what some academic research suggests.
Second, we use the GCOS requirements for earth climate observations via ECV (essential climate variables), many collected from space-based sensors, and accept their definitions of global coverage, intended to ensure that the needs of major global and international organizations (UNFCCC and IPCC) are met, as a core objective. How can new optimization tools such as rule-based engines (RBES) offer alternative methods of evaluating collaborative architectures and constellations? What would the trade space of optimized operational climate monitoring architectures for ECV look like? Third, using the RBES toolkit (2014), we demonstrate a climate-centric rule-based decision engine that optimizes architectural trades of earth observation satellite systems, allowing comparisons to existing architectures and yielding insights for global collaborative architectures. How difficult is it to pull together an optimized climate case study, utilizing for example 12 climate-based instruments on multiple existing platforms and a nominal handful of orbits, for the best cost and performance benefits against the collection requirements of a representative set of ECV? How much effort and resources would an organization expect to invest to realize these analysis and utility benefits?
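The aggregation-versus-disaggregation trade the abstract discusses can be illustrated with a toy rule-based scorer that enumerates candidate instrument-to-platform assignments. This is not the authors' RBES; instruments, masses, and rule weights below are invented purely to show the mechanism:

```python
# Illustrative sketch (not the authors' RBES): score candidate constellation
# architectures with simple declarative rules and enumerate a tiny trade space.
from itertools import combinations

INSTRUMENTS = {"imager": 3, "sounder": 2, "radiometer": 2}  # notional masses

RULES = [
    lambda arch: 10 * len(set().union(*arch)),   # reward ECV coverage
    lambda arch: -2 * len(arch),                 # penalize platform count (cost)
    lambda arch: -5 * sum(                       # penalize overloaded platforms
        1 for p in arch if sum(INSTRUMENTS[i] for i in p) > 4),  # (complexity)
]

def score(arch):
    """An architecture is a list of platforms, each a set of instruments."""
    return sum(rule(arch) for rule in RULES)

# Trade space: every way to split three instruments across one or two platforms.
names = list(INSTRUMENTS)
candidates = [[set(names)]] + [
    [set(c), set(names) - set(c)] for r in (1, 2) for c in combinations(names, r)
]
best = max(candidates, key=score)
print(best, score(best))
```

With these weights, the rules favour disaggregating onto two lightly loaded platforms over one overloaded platform, mirroring the complexity-as-risk argument in the text.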
Analysing wind farm efficiency on complex terrains
NASA Astrophysics Data System (ADS)
Castellani, Francesco; Astolfi, Davide; Terzi, Ludovico; Schaldemose Hansen, Kurt; Sanz Rodrigo, Javier
2014-06-01
The actual performance of onshore wind farms is deeply affected both by wake interactions and by terrain complexity: monitoring how efficiency varies with wind direction is therefore a crucial task. The polar efficiency plot is a useful tool for monitoring wind farm performance. The approach deserves careful discussion for onshore wind farms, where orography and layout commonly affect performance assessment. The present work deals with three modern wind farms, owned by Sorgenia Green, located on hilly terrain with slopes ranging from gentle to rough. Further, the onshore wind farm of Nørrekær Enge has been analysed as a reference case: its layout is similar to that of offshore wind farms and its efficiency is mainly driven by wakes. It is shown and justified that terrain complexity imposes a novel and more consistent way of defining polar efficiency. The dependency of efficiency on wind direction, farm layout and orography is analysed and discussed. Effects of atmospheric stability have also been investigated through MERRA reanalysis data from NASA satellites. The Monin-Obukhov length has been used to discriminate climate regimes.
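A polar efficiency plot bins farm efficiency (actual output over a wake-free ideal) by wind-direction sector. A minimal sketch of the binning step, on invented data (sector width and the definition of "ideal power" are assumptions; the paper argues the definition needs care on complex terrain):

```python
import numpy as np

def polar_efficiency(direction_deg, actual_power, ideal_power, sector=30):
    """Average wind-farm efficiency (actual / wake-free ideal power) per
    wind-direction sector, ready for a polar efficiency plot."""
    bins = np.asarray(direction_deg) % 360 // sector
    eff = np.full(360 // sector, np.nan)      # NaN where a sector has no data
    for b in range(360 // sector):
        mask = bins == b
        if mask.any():
            eff[b] = actual_power[mask].sum() / ideal_power[mask].sum()
    return eff

# Synthetic example: heavy wake losses only when the wind aligns with a row.
direction = np.array([5.0, 15.0, 95.0, 100.0, 185.0])
ideal = np.array([10.0, 10.0, 10.0, 10.0, 10.0])
actual = np.array([9.0, 9.0, 6.0, 6.0, 9.5])
eff = polar_efficiency(direction, actual, ideal)
print(eff[0], eff[3])  # 0.9 0.6
```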
Design and Evaluation of Novel Textile Wearable Systems for the Surveillance of Vital Signals.
Trindade, Isabel G; Machado da Silva, José; Miguel, Rui; Pereira, Madalena; Lucas, José; Oliveira, Luís; Valentim, Bruno; Barreto, Jorge; Santos Silva, Manuel
2016-09-24
This article addresses the design, development, and evaluation of T-shirt prototypes that embed novel textile sensors for the capture of cardio and respiratory signals. The sensors are connected through textile interconnects to either an embedded custom-designed data acquisition and transmission unit or to snap fastener terminals for connection to external monitoring devices. The performance of the T-shirt prototype is evaluated in terms of signal-to-noise ratio amplitude and signal interference caused by baseline wander and motion artefacts, through laboratory tests with subjects in standing and walking conditions. Performance tests were also conducted in a hospital environment using a T-shirt prototype connected to a commercial three-channel Holter monitoring device. The textile sensors and interconnects were realized with the assistance of an industrial six-needle digital embroidery tool and their resistance to wear addressed with normalized tests of laundering and abrasion. The performance of these wearable systems is discussed, and pathways and methods for their optimization are highlighted.
Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Schwabacher, Mark; Martin, Rodney; Waterman, Robert; Oostdyk, Rebecca; Ossenfort, John; Matthews, Bryan
2010-01-01
Automating prelaunch diagnostics for launch vehicles offers three potential benefits. First, it potentially improves safety by detecting faults that might otherwise have been missed so that they can be corrected before launch. Second, it potentially reduces launch delays by more quickly diagnosing the cause of anomalies that occur during prelaunch processing. Reducing launch delays will be critical to the success of NASA's planned future missions that require in-orbit rendezvous. Third, it potentially reduces costs by reducing both launch delays and the number of people needed to monitor the prelaunch process. NASA is currently developing the Ares I launch vehicle to bring the Orion capsule and its crew of four astronauts to low-earth orbit on their way to the moon. Ares I-X will be the first unmanned test flight of Ares I. It is scheduled to launch on October 27, 2009. The Ares I-X Ground Diagnostic Prototype is a prototype ground diagnostic system that will provide anomaly detection, fault detection, fault isolation, and diagnostics for the Ares I-X first-stage thrust vector control (TVC) and for the associated ground hydraulics while it is in the Vehicle Assembly Building (VAB) at John F. Kennedy Space Center (KSC) and on the launch pad. It will serve as a prototype for a future operational ground diagnostic system for Ares I. The prototype combines three existing diagnostic tools. The first tool, TEAMS (Testability Engineering and Maintenance System), is a model-based tool that is commercially produced by Qualtech Systems, Inc. It uses a qualitative model of failure propagation to perform fault isolation and diagnostics. We adapted an existing TEAMS model of the TVC to use for diagnostics and developed a TEAMS model of the ground hydraulics. The second tool, Spacecraft Health Inference Engine (SHINE), is a rule-based expert system developed at the NASA Jet Propulsion Laboratory. We developed SHINE rules for fault detection and mode identification. 
The prototype uses the outputs of SHINE as inputs to TEAMS. The third tool, the Inductive Monitoring System (IMS), is an anomaly detection tool developed at NASA Ames Research Center and is currently used to monitor the International Space Station Control Moment Gyroscopes. IMS automatically "learns" a model of historical nominal data in the form of a set of clusters and signals an alarm when new data fails to match this model. IMS offers the potential to detect faults that have not been modeled. The three tools have been integrated and deployed to Hangar AE at KSC where they interface with live data from the Ares I-X vehicle and from the ground hydraulics. The outputs of the tools are displayed on a console in Hangar AE, one of the locations from which the Ares I-X launch will be monitored. The full paper will describe how the prototype performed before the launch. It will include an analysis of the prototype's accuracy, including false-positive rates, false-negative rates, and receiver operating characteristics (ROC) curves. It will also include a description of the prototype's computational requirements, including CPU usage, main memory usage, and disk usage. If the prototype detects any faults during the prelaunch period then the paper will include a description of those faults. Similarly, if the prototype has any false alarms then the paper will describe them and will attempt to explain their causes.
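IMS's core idea, per the text, is to "learn" a cluster model of historical nominal data and alarm when new data falls far from every cluster. A crude stand-in using k-means-style centers (not the actual IMS algorithm, which learns parameter-range boxes; this is an assumption-labeled sketch):

```python
import numpy as np

def learn_clusters(nominal, n_clusters=4, n_iter=50, seed=0):
    """IMS-style learning step: summarize historical nominal sensor data as
    k-means cluster centers (a simplified stand-in for IMS's clusters)."""
    rng = np.random.default_rng(seed)
    centers = nominal[rng.choice(len(nominal), n_clusters, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(
            np.linalg.norm(nominal[:, None] - centers, axis=2), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = nominal[labels == k].mean(axis=0)
    return centers

def anomaly_score(x, centers):
    """Distance to the nearest learned cluster; large means the new sample
    'fails to match' the nominal model and should raise an alarm."""
    return np.min(np.linalg.norm(centers - x, axis=1))

rng = np.random.default_rng(3)
nominal = rng.normal(0, 1, (500, 3))      # historical nominal sensor vectors
centers = learn_clusters(nominal)
print(anomaly_score(np.zeros(3), centers), anomaly_score(np.full(3, 10.0), centers))
```

As the text notes, the appeal of this data-driven approach is that it can flag faults nobody thought to model explicitly.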
Sarraguça, Mafalda C; Paulo, Ana; Alves, Madalena M; Dias, Ana M A; Lopes, João A; Ferreira, Eugénio C
2009-10-01
The performance of an activated sludge reactor can be significantly enhanced through use of continuous and real-time process-state monitoring, which avoids the need to sample for off-line analysis and to use chemicals. Despite the complexity associated with wastewater treatment systems, spectroscopic methods coupled with chemometric tools have been shown to be powerful tools for bioprocess monitoring and control. Once implemented and optimized, these methods are fast, nondestructive, user friendly, and most importantly, they can be implemented in situ, permitting rapid inference of the process state at any moment. In this work, UV-visible and NIR spectroscopy were used to monitor an activated sludge reactor using in situ immersion probes connected to the respective analyzers by optical fibers. During the monitoring period, disturbances to the biological system were induced to test the ability of each spectroscopic method to detect the changes in the system. Calibration models based on partial least squares (PLS) regression were developed for three key process parameters, namely chemical oxygen demand (COD), nitrate concentration (N-NO₃⁻), and total suspended solids (TSS). For NIR, the best results were achieved for TSS, with a relative error of 14.1% and a correlation coefficient of 0.91. The UV-visible technique gave similar results for the three parameters: an error of approximately 25% and correlation coefficients of approximately 0.82 for COD and TSS and 0.87 for N-NO₃⁻. The results obtained demonstrate that both techniques are suitable for consideration as alternative methods for monitoring and controlling wastewater treatment processes, presenting clear advantages when compared with the reference methods for wastewater treatment process qualification.
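PLS regression handles the many collinear wavelengths of a spectrum by extracting a few latent components that covary with the response. A minimal PLS1 (single response) fit via the NIPALS algorithm on synthetic rank-3 "spectra" (the data and component count are assumptions for illustration, not the paper's calibration):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 calibration via NIPALS: regress a scalar response (e.g. COD)
    on collinear predictors (e.g. absorbances at many wavelengths)."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                       # weight: covariance direction
        w = w / np.linalg.norm(w)
        t = Xc @ w                          # scores
        tt = t @ t
        p = Xc.T @ t / tt                   # X loadings
        qa = (yc @ t) / tt                  # y loading
        Xc = Xc - np.outer(t, p)            # deflate X and y
        yc = yc - qa * t
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)  # regression vector in X space
    return coef, ym - xm @ coef

# Synthetic rank-3 calibration set: 40 samples, 120 "wavelengths".
rng = np.random.default_rng(4)
T = rng.normal(size=(40, 3))
X = T @ rng.normal(size=(3, 120))
y = T @ np.array([1.0, -2.0, 0.5])
coef, b0 = pls1_fit(X, y, 3)
print(np.max(np.abs(X @ coef + b0 - y)))   # near machine precision
```

With exactly rank-3 data and three components the fit is exact; on real spectra the component count is chosen by cross-validation.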
NASA Astrophysics Data System (ADS)
Zhang, Hong; Li, Na; Zhao, Dandan; Jiang, Jie; You, Hong
2017-09-01
Real-time monitoring of photocatalytic reactions facilitates the elucidation of the mechanisms of the reactions. However, suitable tools for real-time monitoring are lacking. Herein, a novel method based on droplet spray ionization, named substrate-coated illumination droplet spray ionization (SCI-DSI), for direct analysis of photocatalytic reaction solutions is reported. SCI-DSI addresses many of the analytical limitations of electrospray ionization (ESI) for the analysis of photocatalytic-reaction intermediates, and has potential for both in situ analysis and real-time monitoring of photocatalytic reactions. In SCI-DSI-mass spectrometry (MS), a photocatalytic reaction occurs by loading sample solutions onto a substrate-coated cover slip and applying UV light above the modified slip; one corner of this slip, adjacent to the inlet of a mass spectrometer, is the high-electric-field location for launching a charged-droplet spray. After testing and optimizing the performance of SCI-DSI, the value of this method for in situ analysis and real-time monitoring of photocatalytic reactions was demonstrated by the removal of cyclophosphamide (CP) in TiO2/UV. Reaction times ranged from seconds to minutes, and the proposed reaction intermediates were captured and identified by tandem mass spectrometry. Moreover, the free hydroxyl radical (·OH) was identified as the main radical responsible for CP removal. These results show that SCI-DSI is suitable for in situ analysis and real-time monitoring of CP removal under TiO2-based photocatalytic reactions, and that it is a potential tool for real-time assessment of the roles of radicals during such reactions.
NASA Astrophysics Data System (ADS)
Tuohy, Eimear; Clerc, Sebastien; Politi, Eirini; Mangin, Antoine; Datcu, Mihai; Vignudelli, Stefano; Illuzzi, Diomede; Craciunescu, Vasile; Aspetsberger, Michael
2017-04-01
The Coastal Thematic Exploitation Platform (C-TEP) is an on-going European Space Agency (ESA) funded project to develop a web service dedicated to observation of the coastal environment and to support coastal management and monitoring. For over 20 years, ESA satellites have provided a wealth of environmental data. The availability of an ever-increasing volume of environmental data from satellite remote sensing provides a unique opportunity for exploratory science and the development of coastal applications. However, the diversity and complexity of the Earth Observation (EO) data available, and the need for efficient data access, information extraction, data management, and high-spec processing tools, pose major challenges to achieving its full potential in terms of Big Data exploitation. C-TEP will provide a new means to handle the technical challenges of observing coastal areas and will contribute to improved understanding and decision-making with respect to coastal resources and environments. C-TEP will unlock coastal knowledge and innovation as a collaborative, virtual work environment providing access to a comprehensive database of coastal EO data, in-situ data, model data, and the tools and processors necessary to fully exploit these vast and heterogeneous datasets. The cloud processing capabilities provided allow users to perform heavy processing tasks through a user-friendly Graphical User Interface (GUI). A connection to the PEPS (Plateforme pour l'Exploitation des Produits Sentinel) archive will provide data from Sentinel missions 1, 2 and 3. Automatic comparison tools will be provided to exploit the in-situ datasets in synergy with EO data. In addition, users may develop, test, and share their own advanced algorithms for the extraction of coastal information. Algorithm validation will be facilitated by the capability to compute statistics over long time series.
Finally, C-TEP subscription services will allow users to perform automatic monitoring of key indicators (water quality, water level, vegetation stress) from Near Real Time data. To demonstrate the benefits of C-TEP, three pilot cases have been implemented, each addressing specific and highly topical coastal research needs. These applications include change detection in land and seabed cover, water quality monitoring and reporting, and a coastal altimetry processor. The pilot cases demonstrate the wide scope of C-TEP and how it may contribute to European projects and international coastal networks. In conclusion, C-TEP aims to provide new services and tools which will revolutionise accessibility to EO datasets, support multi-disciplinary research collaboration, and provide long-term data series and innovative services for the monitoring of coastal regions.
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
China Lake, CA 93555-6110. The systems engineering team is responsible for system and software requirements. Process Dashboard is a software planning and tracking tool. Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, and SEI-Certified PSP/TSP Instructor.
Park, Joon Bum; Choi, Hyuk Joong; Lee, Jeong Hun; Kang, Bo Seung
2013-08-01
We examined the potential of the iPad 2 as a teleradiologic tool for evaluating brain computed tomography (CT) with subtle hemorrhage under the conventional lighting conditions common in remote CT reading. Clinicians' performance in detecting hemorrhage was compared between the iPad 2 and a clinical liquid crystal display (LCD) monitor. We selected 100 brain CT exams performed for head trauma or headache. Fifty had subtle radiological signs of intracranial hemorrhage (ICH), while the other 50 showed no significant abnormality. Five emergency medicine physicians reviewed these brain CT scans using the iPad 2 and the LCD monitor, scoring the probability of ICH on each exam on a five-point scale. Results showed high sensitivities and specificities for both devices. We generated receiver operating characteristic curves and calculated the average area under the curve for the iPad 2 and the LCD (0.935 and 0.900, respectively). Using the iPad 2 and reliable internet connectivity, clinicians can remotely evaluate brain CT with subtle hemorrhage under suboptimal viewing conditions. Considering the distinct advantages of the iPad 2, widespread out-of-hospital use of mobile CT teleradiology can be anticipated soon.
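The ROC analysis in a reader study like this reduces to computing, per display device, the area under the curve from the ordinal confidence scores. A minimal sketch using the rank-sum (Mann-Whitney) identity; the scores below are hypothetical, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Count reader-score pairs where the hemorrhage case outscores the normal one;
    # tied pairs count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# 10 hypothetical CT exams: 1 = subtle hemorrhage present, 0 = normal.
truth = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
ipad  = np.array([5, 4, 4, 3, 3, 2, 1, 3, 1, 2])  # 5-point probability scores, device A
lcd   = np.array([4, 3, 3, 2, 2, 2, 1, 3, 1, 1])  # 5-point probability scores, device B

print(round(auc(ipad, truth), 3), round(auc(lcd, truth), 3))  # prints: 0.96 0.84
```

Averaging such per-reader AUCs across the five physicians gives the per-device figure reported in the abstract.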
Routine hand hygiene audit by direct observation: has nemesis arrived?
Gould, D J; Drey, N S; Creedon, S
2011-04-01
Infection prevention and control experts have expended valuable health service time developing and implementing tools to audit health workers' hand hygiene compliance by direct observation. Although described as the 'gold standard' approach to hand hygiene audit, this method is labour intensive and may be inaccurate unless performed by trained personnel who are regularly monitored to ensure quality control. New technological devices have been developed to generate 'real time' data, but the cost of installing them and using them during routine patient care has not been evaluated. Moreover, they do not provide as much information about the hand hygiene episode or the context in which hand hygiene has been performed as direct observation. Uptake of hand hygiene products offers an inexpensive alternative to direct observation. Although product uptake would not provide detailed information about the hand hygiene episode or local barriers to compliance, it could be used as a continuous monitoring tool. Regular inspection of the data by infection prevention and control teams and clinical staff would indicate when and where direct investigation of practice by direct observation and questioning of staff should be targeted by highly trained personnel to identify local problems and improve practice. Copyright © 2011 the Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Introduction of software tools for epidemiological surveillance in infection control in Colombia
Hernández-Gómez, Cristhian; Motoa, Gabriel; Vallejo, Marta; Blanco, Víctor M; Correa, Adriana; de la Cadena, Elsa; Villegas, María Virginia
2015-01-01
Introduction: Healthcare-Associated Infections (HAI) are a challenge for patient safety in hospitals. Infection control committees (ICC) should follow CDC definitions when monitoring HAI. The handmade method of epidemiological surveillance (ES) may affect the sensitivity and specificity of the monitoring system, while electronic surveillance can improve the performance, quality, and traceability of recorded information. Objective: To assess the implementation of a strategy for electronic surveillance of HAI, Bacterial Resistance, and Antimicrobial Consumption by the ICC of 23 high-complexity clinics and hospitals in Colombia, during the period 2012-2013. Methods: An observational study evaluating the introduction of electronic tools in the ICC was performed; we evaluated the structure and operation of the ICC, the degree of incorporation of the software HAI Solutions, and the adherence to record the required information. Results: Thirty-eight percent of hospitals (8/23) had active surveillance strategies with standard criteria of the CDC, and 87% of institutions adhered to the module of identification of cases using the HAI Solutions software. In contrast, compliance with recording the risk factors for device-associated HAIs was 33%. Conclusions: The introduction of ES could achieve greater adherence to a model of active surveillance, standardized and prospective, helping to improve the validity and quality of the recorded information. PMID:26309340
Roslan, Muhammad Aidil; Ngui, Romano; Vythilingam, Indra; Sulaiman, Wan Yusoff Wan
2017-12-01
The present study compared the performance of sticky traps in order to identify the most effective and practical trap for capturing Aedes aegypti and Aedes albopictus mosquitoes. The study was conducted in three phases. Phase 1 evaluated five sticky trap prototypes (Models A, B, C, D, and E) in release-and-recapture trials using two mosquito release numbers (five and 50) per replicate. Phase 2 compared the performance of Model E against a classical ovitrap that had been modified with a sticky surface (sticky ovitrap), again using release numbers of five and 50. Both traps were assessed further in Phase 3, in which they were installed across nine sampling grids. Results from Phase 1 showed that Model E recaptured more mosquitoes than Models A, B, C, and D. Further assessment between Model E and the modified sticky ovitrap (designated Model F) found that Model F outperformed Model E in both Phases 2 and 3. Thus, Model F was selected as the most effective and practical sticky trap, which could serve as an alternative tool for monitoring and controlling dengue vectors in Malaysia. © 2017 The Society for Vector Ecology.
Nucleic acids-based tools for ballast water surveillance, monitoring, and research
Understanding the risks of biological invasion posed by ballast water—whether in the context of compliance testing, routine monitoring, or basic research—is fundamentally an exercise in biodiversity assessment, and as such should take advantage of the best tools avail...
Weather and atmosphere observation with the ATOM all-sky camera
NASA Astrophysics Data System (ADS)
Jankowsky, Felix; Wagner, Stefan
2015-03-01
The Automatic Telescope for Optical Monitoring (ATOM) for H.E.S.S. is a 75 cm optical telescope which operates fully automatically. As there is no observer present during observation, an auxiliary all-sky camera serves as the weather monitoring system. This device images the whole sky every three minutes. The gathered data then undergo live analysis: an astrometric comparison with a theoretical night-sky model interprets the absence of expected stars as cloud coverage. The sky monitor also serves as a tool for meteorological analysis of the observation site of the upcoming Cherenkov Telescope Array. This overview covers the design and benefits of the all-sky camera and gives an introduction to current efforts to integrate the device into the atmosphere analysis programme of H.E.S.S.
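The live-analysis step can be sketched as catalogue matching: project the theoretical night-sky model to expected pixel positions, then treat expected stars with no detected counterpart as obscured. A hypothetical Python sketch (the star positions and the matching tolerance are invented for illustration):

```python
import numpy as np

def cloud_fraction(expected, detected, tol=2.0):
    """Fraction of catalogue stars with no detected counterpart within `tol`
    pixels; absent stars are interpreted as cloud coverage."""
    expected = np.asarray(expected, float)
    detected = np.asarray(detected, float)
    if len(detected) == 0:
        return 1.0  # nothing detected: fully overcast (or camera fault)
    # Pairwise distances between every expected and every detected position.
    d = np.linalg.norm(expected[:, None, :] - detected[None, :, :], axis=2)
    missing = d.min(axis=1) > tol
    return missing.mean()

# Hypothetical frame: 6 stars predicted by the night-sky model, 4 actually seen.
expected = [(10, 10), (50, 80), (120, 40), (200, 200), (300, 15), (90, 250)]
detected = [(10.4, 9.7), (49.8, 80.3), (119.9, 40.2), (300.5, 14.6)]
print(cloud_fraction(expected, detected))  # 2 of 6 catalogue stars missing, i.e. 1/3
```

A production system would additionally weight by stellar magnitude and bin the sky into regions, so that partial cloud coverage can be localized rather than averaged over the whole frame.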
Source water monitoring and biomonitoring systems
Presentation will provide background information on continuous source water monitoring using online toxicity monitors and cover various tools available. Conceptual and practical aspects of source water quality monitoring will be discussed.
NASA Astrophysics Data System (ADS)
Aufdenkampe, A. K.; Tarboton, D. G.; Horsburgh, J. S.; Mayorga, E.; McFarland, M.; Robbins, A.; Haag, S.; Shokoufandeh, A.; Evans, B. M.; Arscott, D. B.
2017-12-01
The Model My Watershed Web app (https://app.wikiwatershed.org/) and the BiG-CZ Data Portal (http://portal.bigcz.org/) are web applications that share a common codebase and a common goal: to deliver high-performance discovery, visualization, and analysis of geospatial data through an intuitive user interface in the web browser. Model My Watershed (MMW) was designed as a decision support system for watershed conservation implementation. The BiG-CZ Data Portal was designed to provide context and background data for research sites. Users begin by creating an Area of Interest via an automated watershed delineation tool, a free-draw tool, selection of a predefined area such as a county or USGS Hydrologic Unit (HUC), or uploading a custom polygon. Both Web apps visualize and provide summary statistics of land use, soil groups, streams, climate, and other geospatial information. MMW then allows users to run a watershed model to simulate different scenarios of human impacts on stormwater runoff and water quality. The BiG-CZ Data Portal allows users to search for scientific and monitoring data within the Area of Interest, and also serves as a prototype for the upcoming Monitor My Watershed web app. Both systems integrate with CUAHSI cyberinfrastructure, including visualizing observational data from the CUAHSI Water Data Center and storing user data via CUAHSI HydroShare. Both systems also integrate with the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a system for crowd-sourcing environmental monitoring data using open-source sensor stations (http://envirodiy.org/mayfly/) and based on the Observations Data Model v2.
Afrin, Lawrence B; Arana, George W; Medio, Franklin J; Ybarra, Angela F N; Clarke, Harry S
2006-05-01
Accreditation organizations, financial stakeholders, legal systems, and regulatory agencies have increased the need for accountability in educational processes and curricular outcomes of graduate medical education. This demand for greater programmatic monitoring has placed pressure on institutions with graduate medical education (GME) programs to develop greater oversight of these programs. Meeting these challenges requires development of new GME management strategies and tools for institutional GME administrators to scrutinize programs, while still allowing these programs the autonomy to develop and implement educational methods to meet their unique training needs. At the Medical University of South Carolina (MUSC), senior administrators in the college of medicine felt electronic information management was a critical strategy for success and thus proceeded to carefully select an electronic residency management system (ERMS) to provide functionality for both individual programs and the GME enterprise as a whole. Initial plans in 2002 for a phased deployment had to be changed to a much more rapid deployment due to regulatory issues. Extensive communication and cooperation among MUSC's GME leaders resulted in a successful deployment in 2003. Evaluation completion rates have substantially improved, duty hours are carefully monitored, patient safety has improved through more careful oversight of residents' procedural privileges, regulators have been pleased, and central GME administrative visibility of program performance has dramatically improved. The system is now being expanded to MUSC's medical school and other health professions colleges. The authors discuss lessons learned and opportunities and challenges ahead, which include improving tracking of development of procedural competency, establishing and monitoring program performance standards, and integrating the ERMS with GME reimbursement systems.
NASA Astrophysics Data System (ADS)
Marotta, Enrica; Avino, Rosario; Avvisati, Gala; Belviso, Pasquale; Caliro, Stefano; Caputo, Teresa; Carandente, Antonio; Peluso, Rosario; Sangianantoni, Agata; Sansivero, Fabio; Vilardo, Giuseppe
2017-04-01
Recent years have been characterized by the fast development of Remotely Piloted Aircraft Systems (RPAS), which are becoming cheaper, lighter, and more powerful. The concurrent development of high-resolution, lightweight, energy-saving sensors, sometimes designed specifically for airborne applications, is rapidly changing the way monitoring and surveys can be performed in hazardous environments such as volcanoes. An example of this convergence is the new methodology we are currently developing at the INGV-Osservatorio Vesuviano for estimating the thermal energy release of volcanic diffuse degassing areas using ground temperatures from thermal infrared images. Preliminary experiments, carried out during multi-year campaigns inside La Solfatara crater using thermal infrared images and K-type thermocouples inserted into the ground at various depths, found a correlation between surface temperature and the shallow thermal gradient. Given the large extent of the areas affected by thermal anomalies, an effective and expedient way to acquire the IR images is an RPAS equipped with high-resolution thermal and visible cameras. These flights quickly yield the data needed to produce a heat release map, which is then orthorectified and geocoded so that it can be superimposed on digital terrain models or on the orthophotogrammetric mosaic obtained by processing photos acquired by the RPAS. Such rapidly produced heat flux maps, taking into account accurate filtering of atmospheric influence, represent a useful tool for volcanic surveillance and monitoring. Before starting these drone activities, we had to obtain all the permissions required by the complex Italian regulations.
Balbale, Salva N.; Trivedi, Itishree; O’Dwyer, Linda C.; McHugh, Megan C.; Evans, Charlesnika T.; Jordan, Neil; Keefer, Laurie A.
2018-01-01
Background: Scoping reviews are preliminary assessments intended to characterize the extent and nature of emerging research evidence, identify literature gaps, and offer directions for future research. We conducted a systematic scoping review to describe published scientific literature on strategies to identify and reduce opioid misuse among patients with gastrointestinal (GI) symptoms and disorders. Methods: We performed structured keyword searches to identify manuscripts published through June 2016 in the PubMed MEDLINE, Embase, Cochrane Central Register of Controlled Trials, Scopus, and Web of Science databases to extract original research articles that described health care practices, tools or interventions to identify and reduce opioid misuse among GI patients. The Chronic Care Model (CCM) was used to classify the strategies presented. Results: Twelve articles met the inclusion criteria. A majority of studies used quasi-experimental or retrospective cohort study designs. Most studies addressed the CCM's clinical information systems element. Seven studies involved identification of opioid misuse through prescription drug monitoring and opioid misuse screening tools. Four studies discussed reductions in opioid use by harnessing drug monitoring data and individual care plans, and implementing self-management and opioid detoxification interventions. One study described drug monitoring and an audit-and-feedback intervention to both identify and reduce opioid misuse. Greatest reductions in opioid misuse were observed when drug monitoring, self-management, or audit-and-feedback interventions were used. Conclusions: Prescription drug monitoring and self-management interventions may be promising strategies to identify and reduce opioid misuse in gastrointestinal care. Rigorous, empirical research is needed to evaluate the longer-term impact of these strategies. PMID:28780607
Research on intelligent monitoring technology of machining process
NASA Astrophysics Data System (ADS)
Wang, Taiyong; Meng, Changhong; Zhao, Guoli
1995-08-01
Based on research into the sound and vibration characteristics of tool condition, we explore a multigrade monitoring system built around single-chip microcomputers as its core hardware. Using specially designed signal pickup devices, we can perform intelligent multigrade monitoring and forecasting more effectively and, furthermore, build tool condition models adaptively. This is a key problem in FMS, CIMS, and even IMS.
Robert E. Kennedy; Philip A. Townsend; John E. Gross; Warren B. Cohen; Paul Bolstad; Wang Y. Q.; Phyllis Adams
2009-01-01
Remote sensing provides a broad view of landscapes and can be consistent through time, making it an important tool for monitoring and managing protected areas. An impediment to broader use of remote sensing science for monitoring has been the need for resource managers to understand the specialized capabilities of an ever-expanding array of image sources and analysis...
Validity evidence as a key marker of quality of technical skill assessment in OTL-HNS.
Labbé, Mathilde; Young, Meredith; Nguyen, Lily H P
2018-01-13
Quality monitoring of assessment practices should be a priority in all residency programs. Validity evidence is one of the main hallmarks of assessment quality and should be collected to support the interpretation and use of assessment data. Our objective was to identify, synthesize, and present the validity evidence reported supporting different technical skill assessment tools in otolaryngology-head and neck surgery (OTL-HNS). We performed a secondary analysis of data generated through a systematic review of all published tools for assessing technical skills in OTL-HNS (n = 16). For each tool, we coded validity evidence according to the five types of evidence described by the American Educational Research Association's interpretation of Messick's validity framework. Descriptive statistical analyses were conducted. All 16 tools included in our analysis were supported by internal structure and relationship to variables validity evidence. Eleven articles presented evidence supporting content. Response process was discussed only in one article, and no study reported on evidence exploring consequences. We present the validity evidence reported for 16 rater-based tools that could be used for work-based assessment of OTL-HNS residents in the operating room. The articles included in our review were consistently deficient in evidence for response process and consequences. Rater-based assessment tools that support high-stakes decisions that impact the learner and programs should include several sources of validity evidence. Thus, use of any assessment should be done with careful consideration of the context-specific validity evidence supporting score interpretation, and we encourage deliberate continual assessment quality-monitoring. NA. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
Sleep As A Strategy For Optimizing Performance.
Yarnell, Angela M; Deuster, Patricia
2016-01-01
Recovery is an essential component of maintaining, sustaining, and optimizing cognitive and physical performance during and after demanding training and strenuous missions. Getting sufficient amounts of rest and sleep is key to recovery. This article focuses on sleep and discusses (1) why getting sufficient sleep is important, (2) how to optimize sleep, and (3) tools available to help maximize sleep-related performance. Insufficient sleep negatively impacts safety and readiness through reduced cognitive function, more accidents, and increased military friendly-fire incidents. Sufficient sleep is linked to better cognitive performance outcomes, increased vigor, and better physical and athletic performance as well as improved emotional and social functioning. Because Special Operations missions do not always allow for optimal rest or sleep, the impact of reduced rest and sleep on readiness and mission success should be minimized through appropriate preparation and planning. Preparation includes periods of "banking" or extending sleep opportunities before periods of loss, monitoring sleep by using tools like actigraphy to measure sleep and activity, assessing mental effectiveness, exploiting strategic sleep opportunities, and consuming caffeine at recommended doses to reduce fatigue during periods of loss. Together, these efforts may decrease the impact of sleep loss on mission and performance.
Ten Haaf, Twan; van Staveren, Selma; Iannetta, Danilo; Roelands, Bart; Meeusen, Romain; Piacentini, Maria F; Foster, Carl; Koenderman, Leo; Daanen, Hein A M; de Koning, Jos J
2018-04-01
Reaction time has been proposed as a training monitoring tool, but to date, results are equivocal. Therefore, it was investigated whether reaction time can be used as a monitoring tool to establish overreaching. The study included 30 subjects (11 females and 19 males, age: 40.8 [10.8] years, VO2max: 51.8 [6.3] mL/kg/min) who participated in an 8-day cycling event. The external exercise load increased approximately 900% compared with the preparation period. Performance was measured before and after the event using a maximal incremental cycling test. Subjects with decreased performance after the event were classified as functionally overreached (FOR) and others as acutely fatigued (AF). A choice reaction time test was performed 2 weeks before (pre), 1 week after (post), and 5 weeks after (follow-up) the event, as well as at the start and end of the event. A total of 14 subjects were classified as AF and 14 as FOR (2 subjects were excluded). During the event, reaction time at the end was 68 ms (95% confidence interval, 46-89) faster than at the start. Reaction time post event was 41 ms (95% confidence interval, 12-71) faster than pre event, and follow-up was 55 ms faster (95% confidence interval, 26-83). The time-by-class interaction was not significant during (P = .26) or after (P = .43) the event. Correlations between physical performance and reaction time were not significant (all Ps > .30). No differences in choice reaction time between AF and FOR subjects were observed. It is suggested that choice reaction time is not valid for early detection of overreaching in the field.
Electrical impedance tomography.
Costa, Eduardo L V; Lima, Raul Gonzalez; Amato, Marcelo B P
2009-02-01
Electrical impedance tomography (EIT) is a noninvasive, radiation-free monitoring tool that allows real-time imaging of ventilation. The purpose of this article is to discuss the fundamentals of EIT and to review the use of EIT in critical care patients. In addition to its established role in describing the distribution of alveolar ventilation, EIT has been shown to be a useful tool to detect lung collapse and monitor lung recruitment, both regionally and on a global basis. EIT has also been used to diagnose with high sensitivity incident pneumothoraces during mechanical ventilation. Additionally, with injection of hypertonic saline as a contrast agent, it is possible to estimate ventilation/perfusion distributions. EIT is cheap, noninvasive and allows continuous monitoring of ventilation. It is gaining acceptance as a valuable monitoring tool for the care of critical patients.
A patient self-assessment tool for cardiac rehabilitation.
Phelan, C; Finnell, M D; Mottla, K A
1989-01-01
A patient self-assessment tool was designed, tested, and implemented to promote cardiac-specific data collection, based on Gordon's Functional Health Patterns, to maximize patient/family involvement in determining a plan of care, and to streamline primary nurses' documentation requirements. Retrospective and concurrent chart reviews provided data for quality assurance monitoring. The results of the monitoring demonstrated that the self-assessment tool markedly improved the patient-specific data base.
Overview of 'Omics Technologies for Military Occupational Health Surveillance and Medicine.
Bradburne, Christopher; Graham, David; Kingston, H M; Brenner, Ruth; Pamuku, Matt; Carruth, Lucy
2015-10-01
Systems biology ('omics) technologies are emerging as tools for the comprehensive analysis and monitoring of human health. In order for these tools to be used in military medicine, clinical sampling and biobanking will need to be optimized to be compatible with downstream processing and analysis for each class of molecule measured. This article provides an overview of 'omics technologies, including instrumentation, tools, and methods, and their potential application for warfighter exposure monitoring. We discuss the current state and the potential utility of personalized data from a variety of 'omics sources including genomics, epigenomics, transcriptomics, metabolomics, proteomics, lipidomics, and efforts to combine their use. Issues in the "sample-to-answer" workflow, including collection and biobanking are discussed, as well as national efforts for standardization and clinical interpretation. Establishment of these emerging capabilities, along with accurate xenobiotic monitoring, for the Department of Defense could provide new and effective tools for environmental health monitoring at all duty stations, including deployed locations. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
A Novel and Simple Spike Sorting Implementation.
Petrantonakis, Panagiotis C; Poirazi, Panayiota
2017-04-01
Monitoring the activity of multiple, individual neurons that fire spikes in the vicinity of an electrode, namely performing a Spike Sorting (SS) procedure, comprises one of the most important tools in contemporary neuroscience for reverse-engineering the brain. As recording electrode technology rapidly evolves by integrating thousands of electrodes in a confined spatial setting, the algorithms that are used to monitor individual neurons from recorded signals have to become even more reliable and computationally efficient. In this work, we propose a novel framework for the SS approach in which a single-step processing of the raw (unfiltered) extracellular signal is sufficient for both the detection and sorting of the activity of individual neurons. Despite its simplicity, the proposed approach exhibits comparable performance with state-of-the-art approaches, especially for spike detection in noisy signals, and paves the way for a new family of SS algorithms with the potential for multi-recording, fast, on-chip implementations.
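To illustrate the kind of detection step that spike sorting pipelines build on, the following is a minimal sketch of amplitude-threshold spike detection with a robust (median-based) noise estimate, a common ingredient in SS algorithms. This is a generic illustration under assumed parameters (`k`, `refractory`), not the single-step method proposed by the authors.

```python
# Hypothetical sketch of threshold-based spike detection on an
# extracellular signal. Illustrates the generic detection step used in
# many spike-sorting pipelines; not the authors' specific algorithm.
import numpy as np

def detect_spikes(signal, k=4.0, refractory=30):
    """Return sample indices where |signal| exceeds k * sigma_noise.

    sigma_noise uses the robust MAD-based estimate median(|x|)/0.6745,
    which is less biased by the spikes themselves than a plain standard
    deviation. `refractory` suppresses duplicate detections within that
    many samples of a previous one.
    """
    sigma = np.median(np.abs(signal)) / 0.6745  # robust noise estimate
    threshold = k * sigma
    crossings = np.flatnonzero(np.abs(signal) > threshold)
    spikes = []
    last = -refractory - 1
    for idx in crossings:
        if idx - last > refractory:
            spikes.append(int(idx))
            last = idx
    return spikes

# Example: Gaussian noise with two large embedded "spikes"
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 1000)
sig[200] += 20.0
sig[600] -= 20.0
print(detect_spikes(sig))  # expect detections near samples 200 and 600
```

A full sorter would then extract a short waveform window around each detected index and cluster the waveforms to assign spikes to individual neurons.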
Individualized Behavioral Health Monitoring Tool
NASA Technical Reports Server (NTRS)
Mollicone, Daniel
2015-01-01
Behavioral health risks during long-duration space exploration missions are among the most difficult to predict, detect, and mitigate. Given the anticipated extended duration of future missions and their isolated, extreme, and confined environments, there is the possibility that behavioral conditions and mental disorders will develop among astronaut crew. Pulsar Informatics, Inc., has developed a health monitoring tool that provides a means to detect and address behavioral disorders and mental conditions at an early stage. The tool integrates all available behavioral measures collected during a mission to identify possible health indicator warning signs within the context of quantitatively tracked mission stressors. It is unobtrusive and requires minimal crew time and effort to train and utilize. The monitoring tool can be deployed in space analog environments for validation testing and ultimately in long-duration space exploration missions.